
February 28, 2007

Some Like It Hot

Just as the hundred-year-old technology of the incandescent light bulb is losing market share to energy-efficient light sources, such as compact fluorescent lamps and light-emitting diodes, General Electric is fighting back with an improved light bulb [1]. GE's press release gives few details beyond the fact that this high-efficiency incandescent lamp (HEI™) will reach the market in 2010, and it will have about twice the luminous efficiency of a standard incandescent light bulb (30 lumens/watt vs. 10-15 lumens/watt). GE claims that with future advancements, the efficiency will approach that of compact fluorescent lamps. As stated in the press release, GE has invested more than $200 million in the last four years on the development of energy-efficient lighting.

So what's the enabling technology for this advance? The press release just states that it's "innovative new materials." I'll offer two possibilities.

• Hotter Filament and a Phosphor - According to the Planck law of blackbody radiation, a hotter filament will push more of its radiation from the non-visible infrared into the visible spectrum. Unfortunately, it will also push some radiation beyond the visible, into the ultraviolet. The ultraviolet can be used to excite a phosphor to produce visible light. This is not as easy as it sounds, since most phosphors lose efficiency as temperature is increased. (A rough calculation illustrating the temperature effect appears after this list.)

• Tungsten Photonic Filament - Research at Sandia National Laboratories several years ago [2] showed that fabricating a filament in a photonic crystal pattern can increase its luminous efficiency tremendously - theoretically up to 60% conversion of electrical energy into light. The photonic lattice allows passage of just visible wavelengths of light. The infrared energy remains trapped at the filament to maintain its temperature. The practical difficulties here include the fabrication of the photonic crystal pattern of the filament, and the elimination of "hot spots" (high resistance areas) in the structure that would reduce lifetime.
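
As a rough illustration of the first possibility, the short Python sketch below integrates the Planck spectrum to estimate what fraction of a filament's radiated power lands in the visible band at two assumed filament temperatures (2800 K and 3400 K, representative values, not figures from GE's release). It ignores the photopic eye-response weighting needed for a true luminous efficacy, so it only shows the trend.

import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # Planck constant, speed of light, Boltzmann constant

def planck(wl, T):
    """Blackbody spectral radiance at wavelength wl (m) and temperature T (K)."""
    return (2.0 * h * c**2 / wl**5) / np.expm1(h * c / (wl * kB * T))

def visible_fraction(T):
    """Fraction of total radiated power falling in the 380-750 nm visible band."""
    wl_all = np.linspace(0.1e-6, 20e-6, 20000)   # essentially the whole thermal spectrum
    wl_vis = np.linspace(380e-9, 750e-9, 2000)
    return np.trapz(planck(wl_vis, T), wl_vis) / np.trapz(planck(wl_all, T), wl_all)

for T in (2800, 3400):   # ordinary vs. hotter filament temperature (assumed)
    print(f"{T} K: {100 * visible_fraction(T):.1f}% of radiated power is visible")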

References:
1. GE Announces Advancement in Incandescent Technology (Press Release, February 23, 2007).
2. Tungsten photonic lattice developed at Sandia changes heat to light.

February 27, 2007

Atomically Thin Drum Heads

If you take a sheet of a high modulus material, such as steel, mount it between two supporting bars, and tap it lightly with a hammer, it will ring at a characteristic frequency that depends on the modulus of the material and the distance between the supports. A smaller spacing between supports will give a higher frequency, and the sheet will ring for a longer time if it is thin, since it takes energy to excite the material bulk through its thickness. If you really tried, how thin could you make the sheet, and how close could you make the supports?

Scott Bunch, a graduate student at the Cornell Center for Materials Research, decided to make such a resonator just a single atom thick [1]. Such an experiment would not be possible were it not for the arrangement of atoms in the material he used, graphite. Graphite, a specific crystal form of carbon, has carbon atoms bound tightly in planes, but the atomic forces holding the planes together are very small. This arrangement of atoms works well in "lead" pencils, which are really made of graphite, since rubbing will transfer material easily from the pencil to paper, but the graphite holds itself together on the paper.

Bunch glued a small piece of graphite to the end of a toothpick and rubbed it onto the surface of an oxidized silicon wafer he had prepared to have 300-nanometer-deep trenches at one micrometer intervals. The graphite deposited sheets of various thicknesses over the trenches. He used microscopy to identify areas of interest, and these were investigated further using atomic force microscopy (AFM) and Raman spectroscopy. Some sheets were found to be a single atomic layer thick, essentially a single "molecule" of carbon called graphene.

Although Bunch's fabrication technique is crude, it did prove his point. He was able to excite resonance in these sheets both optically and electrically, and he measured the resonant response using optical interferometry. The resonant frequencies were in the megahertz range. In one given example, the resonance was at 71 MHz, with a Q-factor (the ratio of the resonance frequency to the resonance width) of 78. Such high frequencies are a result of the high stiffness of graphene, one of the stiffest materials known.

Are there any practical applications? The material could be useful for weighing small molecules, since the added mass will change the resonant frequency. Since carbon itself is a light element, small molecules could be detected easily. The measured force sensitivity of the graphene sheets was about a femto-Newton per square-root-Hertz.
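
As a back-of-the-envelope illustration of the mass-sensing idea (my own sketch, not a calculation from the paper), an added mass Δm on a resonator of effective mass m shifts the resonant frequency by roughly Δf ≈ -(f0/2)(Δm/m). Using the 71 MHz resonance quoted above, an assumed one-square-micrometer single-layer sheet, and a hypothetical 100 kDa protein as the adsorbed molecule:

f0 = 71e6                     # resonant frequency quoted above (Hz)
sheet_area = 1e-12            # assumed: a 1 micrometer x 1 micrometer graphene sheet (m^2)
areal_density = 7.6e-7        # mass per unit area of single-layer graphene (kg/m^2)
m_sheet = areal_density * sheet_area

dalton = 1.66054e-27          # kg
dm = 100e3 * dalton           # hypothetical adsorbed molecule: a 100 kDa protein

df = -(f0 / 2.0) * (dm / m_sheet)   # first-order frequency shift
print(f"sheet mass = {m_sheet:.1e} kg, added mass = {dm:.1e} kg, shift = {df:.0f} Hz")

A single large molecule produces a shift of several kilohertz in this estimate, which is the kind of signal such a mass sensor would look for.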

References:
1. J. Scott Bunch, et al., "Electromechanical Resonators from Graphene Sheets," Science, vol. 315, no. 5811 (26 January 2007), pp. 490-493.
2. Lauren Gold, "Thin but tough: Graphene sheets could have many uses, Cornell student discovers." (Cornell News)

February 26, 2007

Not So Elementary, My Dear Watson

There are two major motifs in physics. The first is the discovery of the basic laws of Nature. The other is the discovery of the basic building blocks of matter - the elementary particles. However, the search for such particles seems to be never-ending. About a hundred years ago, the atom was considered to be elementary, so there were as many elementary particles as elements in the Periodic Table. Shortly thereafter, atoms were found to be divisible into protons, neutrons and electrons. Here was real progress, since about a hundred elementary particles had been reduced to just three. The celebration was short-lived, as a veritable zoo of subatomic particles was discovered.

The last few decades of the twentieth century saw the emergence of the Standard Model, which tries to organize these particles. Presently, there are eighteen elementary particles - six quarks, six leptons, plus six bosons - and their antimatter counterparts, although the neutrino and antineutrino may be the same particle. Confused? So are most physicists. In their continuing quest to understand the elementary particles, physicists are building larger and larger particle accelerators ("atom smashers"), the current example being the Large Hadron Collider.

Burton Richter, Nobel Laureate and Emeritus Professor at Stanford University, was involved in this search for elementary particles, and he's credited as co-discoverer of one particle, the J/psi, in 1974. He was Director of the Stanford Linear Accelerator Center (SLAC) from 1984 to 1999. Richter gave his view on the future of particle physics at the annual meeting of the American Association for the Advancement of Science (February, 2007) [1]. He outlined the key problems in his field; namely, the failure of the Standard Model to predict the dark matter and dark energy that comprise ninety-six percent of the universe, and the solar neutrino problem. Neutrinos from the sun apparently change from one type to another on their way to the earth, and they may have some, albeit small, mass. That neutrinos have mass at all was quite unexpected. Richter hopes that the Large Hadron Collider, which is possibly the last large particle accelerator to be built for some time, will discover the Higgs boson, a particle predicted by the Standard Model to give mass to the other elementary particles. The Large Hadron Collider may also determine whether the extra space dimensions predicted by String Theory actually exist.

I summarized the field of particle physics in a limerick I wrote in 1997 for a contest sponsored by the American Physical Society [2],

Hadrons, leptons, bosons, too,
Are members of our little zoo.
Though in their stalls
As little balls,
They're really clouds of quantum goo.

Dr. John H. Watson, Sherlock Holmes' sidekick, summarized Holmes' scientific credentials as follows [3]: "Astronomy: nil... Knowledge of Chemistry: profound."

References:
1. Nobel laureate Burton Richter to speak about future of particle physics (Stanford University Press Release).
2. American Physical Society Limericks.
3. Quoted by Philip Beaman in Nature, vol. 445, issue 7128 (8 February 2007), p. 593.
4. Burton Richter's Faculty Profile.
5. Stanford Linear Accelerator Center.
6. Wall Chart: The Standard Model of Fundamental Particles and Interactions

February 23, 2007

Machine Learning

Yesterday (2/22/2007), I listened to a Honeywell Invitational Lecture presentation on Machine Learning by David L. Waltz, Director of the Center for Computational Learning Systems at Columbia University. Waltz's talk was primarily a summary of research presented at a recent Artificial Intelligence conference on "Predicting Electricity Distribution Feeder Failures Using Machine Learning Susceptibility Analysis." [1] This research was funded by Consolidated Edison (ConEd) as a way to improve the reliability of the New York City electrical system. This funding was likely a reaction to the public outcry over chronic power supply problems in the ConEd system [2].

What was interesting about Waltz's talk was the difference between the computer science and "real" science approaches to problems. Waltz's machine learning systems make very good predictions about the performance of a complex system, but the actual reason why the predictions are so good is unknown. He stated that one problem in the technology transfer of his prediction system to ConEd was that, since he couldn't explain why it worked, he wasn't able to convince the end-user that it actually would work. Waltz also explained how some approaches to partitioning data sets worked very well, but his research team had no explanation as to why this happened. He mentioned that one team member had invoked a "Heisenberg uncertainty principle" in which looking too closely at the data gives worse results. This is certainly not the model-building approach used by scientists, and it reminded me of the Automatic Computer Science Paper Generator placed online by some students a few years ago.

All this notwithstanding, Waltz did make an important statement about machine learning. Machine learning can replace conventional algorithmic programs in most control systems. Consider, for example, the engine control unit (ECU) in an automobile. It's an algorithmic system built on a model of how an engine should work. It accepts input from various vehicle sensors, such as the oxygen sensor and manifold vacuum sensor, and adjusts fuel injection and other parameters to give a good trade-off between performance and efficiency. Using Waltz's approach, it would be better to use a machine learning system with the same sensor inputs and control outputs. First of all, this would save a lot of work in building the original model and writing the algorithmic program. You would then drive the automobile for a few hours at various speeds, up hills and on straightaways, to teach the automobile how to drive. At the start, your travel would be somewhat erratic, stopping and starting in fits, perhaps even stalling-out, but the automobile would drive better and better as time went on. At a certain point, you could clone the system for that one automobile and implement it on all similar automobiles. More importantly, the system would adapt to the peculiarities of each individual vehicle, and further adapt as the vehicle components aged. Perhaps it would even eliminate that pesky "check engine" light.
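
To make the adapt-as-you-drive idea concrete, here is a deliberately minimal Python sketch, not a production ECU design and certainly not Waltz's actual system: a linear model maps two hypothetical sensor readings to a fuel-trim output and is updated online by stochastic gradient descent against a feedback signal. In a real vehicle the feedback would come from something like the oxygen sensor, and the learner would be far richer.

import random

class OnlineFuelTrim:
    """Tiny online learner: fuel trim = w . sensors + b, updated after every sample."""
    def __init__(self, n_inputs, lr=0.01):
        self.w = [0.0] * n_inputs
        self.b = 0.0
        self.lr = lr

    def predict(self, sensors):
        return sum(w * s for w, s in zip(self.w, sensors)) + self.b

    def update(self, sensors, target):
        # Gradient step on the squared error between prediction and feedback target.
        error = self.predict(sensors) - target
        self.w = [w - self.lr * error * s for w, s in zip(self.w, sensors)]
        self.b -= self.lr * error
        return error

# Simulated "drive": the (unknown) ideal trim is 0.8*vacuum - 0.3*o2 + 0.1, plus noise.
controller = OnlineFuelTrim(n_inputs=2)
for step in range(2000):
    vacuum, o2 = random.uniform(0, 1), random.uniform(0, 1)
    ideal = 0.8 * vacuum - 0.3 * o2 + 0.1 + random.gauss(0, 0.01)
    controller.update([vacuum, o2], ideal)

print("learned weights:", [round(w, 2) for w in controller.w], round(controller.b, 2))

After enough "driving," the learned weights approach the underlying relationship without anyone having written a model of it, which is the point Waltz was making.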

I concede that there's a place for both the scientific, model-building, approach, and the computer science, "let's see if it works," approach, depending on the application; but I would prefer to have an algorithmic program running a nuclear reactor.

References:
1. Philip Gross, et al., "Predicting Electricity Distribution Feeder Failures Using Machine Learning Susceptibility Analysis," Proceedings of the Eighteenth Innovative Applications of Artificial Intelligence Conference (July 16-20, 2006, Boston, Massachusetts). Available as a PDF file here.
2. Sewell Chan and Craven McGinty, "Blackout Area Has a History of Breakdowns" (New York Times, July 26, 2006).
3. SCIgen-An Automatic CS Paper Generator.

February 22, 2007

Couch Potato Hero

Several inventors are responsible for the invention of the television. First, there was John Logie Baird (1888 - 1946), a Scottish engineer who demonstrated an electromechanical television. Then there were Vladimir Zworykin (1889 - 1982), a Russian-American, and the American Philo Taylor Farnsworth (1906 - 1971), who demonstrated electronic television systems. After these three, there were incremental improvements, such as color, stereo sound, and "instant on," the great energy waster of the 1970s. Many would argue that the greatest improvement in television technology was the remote control. The co-inventor of the television remote, physicist Robert Adler, died last week (2/15/2007) at age ninety-three [1-4].

Adler earned a doctorate in physics from the University of Vienna in 1937. He emigrated to the United States, where he joined the research division of Zenith Electronics in 1941. Adler worked on military communications systems during World War II. After a successful career spanning many decades, he retired from Zenith as Vice President and Director of Research in 1979, but served as a consultant until 1999, when Zenith merged with LG Electronics. Zenith, the last company to manufacture televisions in the United States, finally succumbed to the foreign domination of consumer electronics.

The first television remote, introduced in 1950, was connected by wire to the television, a tripping hazard. An actual radio controller, although possible with the technology of the time, was not considered to be viable in a remote, since the radio signals could pass through walls and activate a neighbor's television. A Zenith engineer, Eugene Polley, created the first wireless controller. Polley's remote, the "Flash-Matic," used a light beam in the handheld unit to activate photodetectors in the television. The performance of this remote was not ideal, since the television was also triggered by sunlight and room lights, but nearly 30,000 were sold following its introduction in 1955. Polley's unit needed a battery, and the Zenith marketing people didn't like batteries. When the remote battery died, some people thought the television needed repair - a common enough occurrence in the vacuum tube era.

It was Polley who suggested that ultrasonics could be used, and Adler was assigned to the project. Adler's solution was a remote without a battery. A trigger mechanism would strike aluminum rods that would ring at ultrasonic frequencies. The different frequencies of shorter and longer rods were detected at the television. There was some interference from jangling keys and spilled coins, but Zenith sold more than nine million of the remotes between 1956 and 1982. Today, remotes are based on infrared signaling, a technology not practical in the 1950s. Since Polley had developed the "Flash-Matic," and both Polley and Adler were involved with the ultrasonic remote, they are credited as co-inventors of remote technology. Adler and Polley were awarded an Emmy in 1997 by the National Academy of Television Arts and Sciences.

Robert Adler is a listed inventor on more than 180 patents. Another of his fields of expertise was surface acoustic wave (SAW) devices, which have manifold applications in television receivers and cellular telephones. He also developed sensitive UHF amplifiers used in Radio Astronomy. Adler was a member of the National Academy of Engineering, and he was a fellow of the American Association for the Advancement of Science. His wife, Ingrid, says he didn't watch much television. "He was more of a reader."

References:
1. Patricia Sullivan, "Robert Adler, 93; Engineer, Co-Inventor of TV Remote Control." (Washington Post)
2. TV remote control's co-inventor dies. (Gulf News)
3. Shannon Dininny, "Inventor of the TV Remote Dies." (Associated Press)
4. Robert Adler (Wikipedia)

February 21, 2007

To Rose the Nose

No, Shakespeare didn't write that, but if you thought he did, you've homed in on what may have been a big factor in Shakespeare's popularity. Scientists at the University of Liverpool have used electroencephalograms (EEGs) on subjects reading Shakespeare to show that Shakespearean language causes us to think extra hard [1]. A phrase such as, "To rose the nose," uses a linguistic form called functional shift; in this case, using a noun as a verb. Reading such a phrase causes a peak in brain activity as we try to understand what's being said.

Philip Davis, an English Professor also at the University of Liverpool, summarizes this effect when he says, "By throwing odd words into seemingly normal sentences, Shakespeare surprises the brain and catches it off guard in a manner that produces a sudden burst of activity - a sense of drama created out of the simplest of things." Shakespeare maintains our interest by forcing our brains to work harder. This result is also making the scientists work harder, since they intend to use the advanced brain imaging techniques of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) to find which areas of the brain are most affected.

While on the topic of Shakespeare, a report in the British Medical Journal [2] reviewed all 39 of Shakespeare's plays, along with three of his narrative poems, to summarize the physical effects of strong emotions in Shakespeare's writing. There were ten deaths from grief, and another twenty-nine deaths from other emotions. Eighteen cases of fainting occur, and thirteen near-fainting events. Shakespeare isn't all that dark - not all of these are on-stage events. Many are just mentioned by characters.

References:
1. Reading Shakespeare has dramatic effect on human brain
2. Strong emotions in Shakespeare's plays lead to fits and fatalities (British Medical Journal).

February 20, 2007

Red Hot Chili Peppers

Chili pepper is the fruit of the capsicum plant (genus Capsicum), a member of the nightshade (Solanaceae) family. The nightshade family includes such diverse plants as belladonna (deadly nightshade), eggplant, potato, tobacco, tomato, and the petunia. Chili is widely cultivated because it contains the chemical capsaicin, technically known as methyl vanillyl nonenamide (C18H27NO3, or N-(4-hydroxy-3-methoxybenzyl)-8-methylnon-6-enamide, CAS number 404-86-4). Capsaicin is lipophilic (it dissolves readily in fats and oils), and since it produces a burning sensation in the mouth when consumed in even small quantities, it is used as a cooking spice. Surprisingly, birds are unaffected by its taste, so they serve to propagate its seeds.

The hotness of chili peppers is measured on the Scoville scale, named after Wilbur Scoville, the American pharmacist who developed it. The Scoville number was originally the dilution of a pepper preparation in sugar water required to mask its hotness, but this analysis is now done using high performance liquid chromatography (HPLC, sometimes called high pressure liquid chromatography). Bell pepper, a pepper with no hotness, rates a zero on this scale; Jalapeño pepper has Scoville numbers up to 8,000, and Cayenne pepper up to 50,000. The chili pepper, like its nightshade relatives, the potato, tomato and tobacco, was discovered in the New World, and it was introduced to Old World cuisine just five hundred years ago. Shortly after the discovery of America, chili cultivation spread across Europe, Africa and Asia.

A recent paper published in Science [1] by a fifteen-member team of archaeologists from the Smithsonian Museum and many universities has shown that chili peppers have been a part of the human diet for more than 6,000 years. Their research establishes chili pepper as the oldest New World spice, and it shows also that chili pepper was not just found in the wild and added to foods. It was cultivated along with corn, forming a food staple pairing that persists in Latin American cuisine to this day. This research was scientific detective work of the first order, and the evidence was from microscopic starch granules found in pits of ancient cooking implements. Linda Perry of the Smithsonian Institution found some unusual starch granules from a 6,100-year-old site in western Ecuador and tried to identify them. She found that they were distinct from the starch granules of wild chili peppers, since they were larger and had a central depression. They were instead a match to the starch granules of modern, domesticated chili peppers. Perry found matches to granules from seven other archaeological sites.

Scientific interest in the chili pepper is not just limited to Archaeology. New Mexico State University is the home of the Chile Pepper Institute. A professor there, Paul Bosland, did a study of the Bhut Jolokia chili pepper, found in the Assam region of northeastern India. The Bhut Jolokia, or "Ghost chili," does not fruit readily, so it took Bosland several years to get material for a Scoville test. He found that Bhut Jolokia ranks at more than a million Scoville units. On February 9, 2007, Guinness World Records declared the Bhut Jolokia chili pepper to be the world's hottest. Of course, Bosland knew that already. A scientist just needs to look at the data.

The Red Hot Chili Peppers are an American musical group known for its fusion of rock, psychedelic, punk, funk, rap and heavy metal musical styles.

References:
1. Linda Perry, et al., "Starch Fossils and the Domestication and Dispersal of Chili Peppers (Capsicum spp. L.) in the Americas," Science, vol. 315, no. 5814 (16 February 2007), pp. 986 - 988
2. David Brown, "One Hot Archaeological Find - Chili Peppers Spiced Up Life 6,100 Years Ago." (Washington Post)
3. Heidi Ledford, "Ancient foodies liked it hot." (New Scientist Online)
4. Smithsonian scientists report ancient chili pepper history (Smithsonian Press Release, February 15, 2007).
5. Red hot chili pepper research spices up historical record (University of Calgary Press Release, February 15, 2007).
6. Hot, hotter, hottest: NMSU sets Guinness chili record (Free New Mexican).
7. Chile Pepper Institute.

February 19, 2007

Hash

There are several words that mean different things to technical and non-technical people. One of these is "mole;" another is "unionized." Then there's "hash." To the common man, hash is a comestible, a dish prepared with chopped meat, combined with vegetables, and served with gravy. The word hash derives from the French word, "hacher," which means "to chop." A hash, or hash function, in the technical sense, is a way to provide a fingerprint for an electronic file. This fingerprint, or hash value, is generally a string of between twenty and thirty-two bytes. Changing only a single character in a file will change its hash value so completely that it bears no resemblance to the hash value of the original file. Importantly, it is infeasible to construct a file with the same hash value as a given file unless it copies the given file exactly, sequential character for sequential character, a property of hash functions called "anti-collision" (more formally, collision resistance).
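
The avalanche behavior described above is easy to see with Python's standard hashlib module (the two strings are, of course, made up for illustration):

import hashlib

msg1 = b"Changing only a single character in a file changes its hash value completely."
msg2 = b"Changing only a single character in a file changes its hash value completely!"

print(hashlib.sha1(msg1).hexdigest())    # 20-byte (40 hex digit) SHA-1 fingerprint
print(hashlib.sha1(msg2).hexdigest())    # bears no resemblance to the digest above
print(hashlib.sha256(msg1).hexdigest())  # the longer, 32-byte SHA-256 digest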

An important use of hash functions is the maintenance of login passwords. When you log onto a computer, the computer doesn't compare your typed password to a previously stored password. Instead, it compares the hash value of what you typed with the stored hash value of your password, and this protects your password. If someone gains access to the hashed password file on the computer, they still don't know your password, since the hash is a "one-way function." Knowledge of the hash value gives you no knowledge of the number and/or letter string that generated it. That's why a system administrator is able to reset your password to a new value, but he's not able to tell you your current password.
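
A minimal sketch of that login comparison (my own illustration, not any particular operating system's scheme; real systems store a per-user salt and use a deliberately slow function, as the PBKDF2 call below does):

import hashlib, hmac, os

def hash_password(password, salt=None):
    """Return (salt, digest); only these are stored, never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, stored_digest):
    """Hash what the user typed and compare digests; the stored hash never reveals the password."""
    return hmac.compare_digest(hash_password(password, salt)[1], stored_digest)

salt, stored = hash_password("correct horse")   # done once, at account creation
print(verify("correct horse", salt, stored))    # True
print(verify("wrong guess", salt, stored))      # False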

Why is the anti-collision property important? As an example, let's consider a sales contract with a negotiated price of $10,000. The seller decides to increase his profit by adding a single character, another zero, to make the price $100,000. This change will be reflected immediately in the hash value of the contract. Perhaps the seller realizes this, so he decides to add a few extra spaces, some non-printing characters, or perhaps a benign appendix that looks as if it was part of the contract, to generate the same hash value. Anti-collision makes this impossible.

Hash functions are so important to electronic commerce that they are standardized by the U.S. National Institute of Standards and Technology (NIST). As computer technology advances, especially the computation speed of computers, there is a need for better and better hash functions. The most used hash function, released by the U.S. National Security Agency (NSA) in 1995, is called SHA-1. SHA stands for "Secure Hash Algorithm." Now, twelve years and many CPU cycles later, SHA-1 is starting to show its age. There's always a trade-off between the speed of a hash function and its security. SHA-1 was designed to give reasonable security at the processing speeds of the mid-1990s, and cryptanalysts have since discovered some weaknesses in the algorithm. A concerted effort may now allow generation of files with the same hash values, demolishing the anti-collision property.

Waiting in the wings is a similar, but more complex, hash function called SHA-256. Still, both SHA-1 and SHA-256 are based on principles established fifteen years ago, so it's time to rethink hash. In this effort, NIST is enlisting the aid of academic cryptographers by establishing a hash competition. This approach is not new. In 1997, NIST decided that the current encryption algorithm, called the Data Encryption Standard (DES), was becoming vulnerable to attack, so it started a competition for a replacement encryption algorithm. NIST chose a slightly modified version of Rijndael, an algorithm submitted by two Belgian cryptographers, for its Advanced Encryption Standard. There were fifteen submissions from ten countries in the AES competition. NIST sponsored two workshops on the requirements for a new hash function in 2005 and 2006. A draft of the minimum acceptability requirements, submission requirements, and evaluation criteria was published in the January 23, 2007, issue of the Federal Register. Submissions are due by fall 2008, and the standard will be announced in 2011. By the way, there is no prize in this competition, other than the envy of your peers, and hash immortality.

References:
1. Bruce Schneier, "An American Idol for Crypto Geeks." (Wired Magazine Online, Feb, 08, 2007)
2. Hash Function (Wikipedia).
3. Advanced Encryption Standard (Wikipedia).
4. Advanced Encryption Standard (PDF File).
5. NIST's Plan for New Cryptographic Hash Functions.
6. The SHA-1 hash value of this file (before I added this reference!) was 5158509721f10d91f729c080aed2f4263521bac2. If you have access to a Linux system, you can get the hash of any file using the command, openssl dgst -sha1 {filename}. For a bytewise listing (51:58:50:97:21:f1:0d:91:f7:29:c0:80:ae:d2:f4:26:35:21:ba:c2), type openssl dgst -c -sha1 {filename}.

February 16, 2007

ESP - Extra Sensory Princeton

Princeton University, founded in 1746, is a premier educational institution. Along with Harvard, Yale, and five others, it is a member of the Ivy League. It is also home to the Princeton Institute for the Science and Technology of Materials (PRISM), and has been a home to many famous scientists and mathematicians.

In this hotbed of academic and scientific excellence stands an unusual research laboratory, the Princeton Engineering Anomalies Research laboratory (PEAR). This laboratory has existed in the basement of Princeton's Engineering Building since its founding in 1979 by Robert G. Jahn, then Dean of Princeton's School of Engineering and Applied Science. Its purpose was to pursue a "rigorous scientific study of the interaction of human consciousness with sensitive physical devices, systems, and processes common to contemporary engineering practice." What this translates to is a study of "Mind over Matter," sometimes termed Extra-Sensory Perception (ESP). As can be imagined, this laboratory has its detractors both within and without the university. Robert L. Park, a physicist and author of the book, "Voodoo Science: The Road From Foolishness to Fraud," states that PEAR has been "... an embarrassment to science, and I think an embarrassment for Princeton. [1]" A definite lack of funding, possibly the result of such controversy, has forced the closing of the laboratory at the end of February, 2007.

Jahn, the founder of PEAR, was an expert on jet propulsion, and he used his cachet of scientific expertise to support the laboratory on private donations only; no funding was solicited from Princeton University or the government. PEAR received donations of more than $10 million since its inception. James S. McDonnell, a friend of Jahn's and a founder of the McDonnell Douglas Corporation, was a founding donor. Philanthropist Laurance Rockefeller visited PEAR and was also a donor.

There were likely some "scientific" arguments for founding PEAR. Of course, Quantum Mechanics considers the observer to be an important part of a system. In some cases, observing an experiment will determine its outcome. There is also the argument that a proper series of experiments will decide, once and for all, the question of whether ESP really exists. A 1986 paper by Jahn and Brenda J. Dunne, "On the Quantum Mechanics of Consciousness, With Application to Anomalous Phenomena," [2] explores the Quantum Mechanics link. It isn't as if PEAR stood alone in all this. The US Central Intelligence Agency, arguably a skeptical bunch, funded a similar program in the 1970s and 1980s on "Remote Viewing" at the Stanford Research Institute (SRI) [3]. There was a lengthy paper on this topic published in the Proceedings of the IEEE by the SRI group, but at that time the CIA involvement was not known [4].

A typical PEAR experiment involved participants attempting to influence physical random noise sources. The noise sources were coupled to number displays that showed the number 100, on average. Since the noise sources were truly random, the displays showed numbers both above and below 100. The participants were instructed to try to get lower, or higher, numbers just by thinking about it. The PEAR analysis of the data appears to show an influence at the 200 - 300 ppm level, but the number of trials that can be done at a single sitting is indeed limited, and I, for one, don't believe the evidence to be too compelling. As Jahn says, "For 28 years, we've done what we wanted to do... If people don't believe us after all the results we've produced, then they never will." [1] It may be time to move to something more credible, like the Global Consciousness Project.

References:
1. Benedict Carey, "A Princeton Lab on ESP Plans to Close Its Doors" (New York Times, February 10, 2007).
2. Robert G. Jahn and Brenda J. Dunne, "On the Quantum Mechanics of Consciousness, With Application to Anomalous Phenomena," Foundations of Physics, vol. 16, no. 8 (1986), pp. 721-772 (PDF File).
3. H. E. Puthoff, "CIA-Initiated Remote Viewing At Stanford Research Institute."
4. H. E. Puthoff and R. Targ, "A perceptual channel for information transfer over kilometer distances: Historical perspective and recent research," Proceedings of the IEEE, vol. 64, pp. 329-354 (1976).
5. PEAR Web Site.

February 15, 2007

Preservation - The 21st Century Engineering Challenge

An engineer's function is to make the world a better place in which to live. Fortunately, this generally coincides with making money for our employer, so we get paid for our efforts. It's surprising that we are not held in higher esteem. The National Academy of Engineering (NAE) was founded in 1964, almost as a century-long afterthought to the founding of the National Academy of Sciences in 1863. The NAE has about 2,000 members, all elected by their peers, and all superstar engineers.

The NAE has launched the Engineering Challenges web site to solicit ideas as to what should be the "Grand Challenges for Engineering" for the twenty-first century. A committee [1] will review these ideas, and the final list of challenges will be presented in September 2007.

Former US President Jimmy Carter, who has some engineering credentials, has already posted his thoughts. He would like to see production of ethanol and bio-diesel fuels to replace oil, but he perceives the greatest challenge to be the growing chasm between the rich and poor. Eugene S. Meieran, a Senior Fellow at Intel Corporation, has listed the greatest innovations of the twentieth century, as follows [2]:

• Electrification
• Automobile
• Airplane
• Water supply and distribution
• Electronics
• Radio and television
• Agricultural mechanization
• Computers
• Telephone
• Air conditioning/refrigeration
• Interstate highways
• Space flight
• Internet
• Imaging
• Household appliances
• Health technologies
• Petrochemical technology
• Laser and fiber optics
• Nuclear technologies
• High-performance materials

It's not surprising that Honeywell has products in many of these areas. Meieran goes on to give his extrapolation to the twenty-first century [3].

• Energy conservation
• Resource protection
• Food and water production and distribution
• Waste management
• Education and learning
• Medicine and prolonging life
• Security and counter-terrorism
• New technology
• Genetics and cloning
• Global communication
• Traffic and population logistics
• Knowledge sharing
• Integrated electronic environment
• Globalization
• AI, interfaces and robotics
• Weather prediction and control
• Sustainable development
• Entertainment
• Space exploration
• "Virtualization" and VR
• Preservation of history
• Preservation of species

There are several things I don't like about Meieran's list. Many of the items could be combined, and the list is too shortsighted. It seems to be just a facile extrapolation of current trends, and it doesn't seem to project beyond a few decades. However, as Yogi Berra once said (along with Niels Bohr, and many others), "It's difficult to make predictions, especially about the future." What strikes me as the overarching theme of this list is "preservation." We want to preserve our present high standard of living. At the same time, we need to preserve the quality of our environment, preserve our natural resources (animal, vegetable and mineral), preserve ourselves, and preserve our cultural heritage. I think Preservation is the grand challenge of this century.

References:
1. Committee members are William Perry, Alec Broers, Farouk El-Baz, Wesley Harris, Bernadine Healy, W. Daniel Hillis, Calestous Juma, Dean Kamen, Raymond Kurzweil, Robert Langer, Jaime Lerner, Bindu Lohani, Jane Lubchenco, Mario Molina, Larry Page, Robert Socolow, J. Craig Venter, and Jackie Ying.
2. George Constable and Bob Somerville, "A Century of Innovation: Twenty Engineering Achievements that Transformed Our Lives," National Academies Press (Washington, D.C., 2003). Available online at http://books.nap.edu/catalog/10726.html.
3. Eugene S. Meieran, "21st Century Innovations."
4. The Engineering Challenges project is sponsored by a grant from the U.S. National Science Foundation, Award # ENG-063206

February 14, 2007

Benzene

Benzene (C6H6, a.k.a. 1,3,5-cyclohexatriene) is a common industrial solvent. Michael Faraday first isolated it from oil gas in 1825. Of course, the interesting thing about benzene is its ring structure, and chemical folklore says that benzene's ring structure was revealed in a dream to Friedrich Kekulé. His dream was about a snake biting its own tail, a symbol common in many cultures and known as the Ouroboros (Greek for "tail-eater"). However, two other chemists, Archibald Scott Couper (1831-1892) and Josef Loschmidt (1821-1895), have a substantial claim to the discovery; but Kekulé's dream is a nice story, so it persists.

Unfortunately, benzene is carcinogenic, so there is a major effort to remove benzene from all chemical processes. Benzene's Threshold Limit Value (TLV), the concentration above which benzene is not considered safe, is just 0.5 ppm. Benzene is present in crude oil, so there's typically a trace of benzene in automotive fuels. The U.S. Environmental Protection Agency has now acted to have refineries remove benzene from gasoline. This action was the result of pressure from environmental lobby groups in Oregon and Washington. The root cause of the environmental concerns in the Pacific Northwest is that the local refineries there refine Alaskan crude oil that has about twice as much benzene as typical crude, and this benzene is concentrated in the refined gasoline. This has resulted in benzene levels in Portland air of up to forty times the safe level. It is estimated that removing benzene from automotive fuel will cost less than a half cent per gallon.

Even with the new EPA rule [2], it will still take until 2015 for the benzene in gasoline to be reduced to a third of the present levels. This will result in a 50% cut in benzene levels in the air in Oregon and Washington. Also in the EPA plan are improvements to portable gasoline cans to reduce spills and vapor escape to the atmosphere. Auto makers are also required to reduce emissions at automobile cold-starts. The EPA claims that the cost will be less than $1 per vehicle for this improvement. I suspect most of this will happen in the software in the Engine Control Unit.

References:
1. Michael Milstein, "A big sigh of relief for NW air." (The Oregonian)
2. Final Rule: Control of Hazardous Air Pollutants from Mobile Sources (signed February 9, 2007).

February 13, 2007

Itsy Bitsy Spider

Spider silk is nature's best mechanical material. It has an ultimate tensile strength of about 1.3 GPa, just below the highest strength steels. Most maraging steels have a UTS in the range of 1.6 - 2.5 GPa (230,000 - 360,000 PSI). Spider silk is much less dense than steel, so its strength/density ratio is about five times greater. It's no wonder that there are efforts underway to produce artificial spider silk or substances that mimic its properties. Research in spider silk over the last few years has revealed that the unique properties of spider silk arise from a dense, ordered, nano-crystalline phase in a polymeric protein matrix.
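
That "five times" figure is easy to check with rough, assumed densities (about 1,300 kg/m³ for spider silk and 7,850 kg/m³ for steel), the silk UTS quoted above, and the lower end of the maraging-steel range:

silk_uts, silk_density = 1.3e9, 1300.0     # Pa, kg/m^3 (density assumed)
steel_uts, steel_density = 1.6e9, 7850.0   # lower end of the maraging range quoted above

silk_specific = silk_uts / silk_density    # "specific strength" in J/kg
steel_specific = steel_uts / steel_density

print(f"silk: {silk_specific:.2e} J/kg, steel: {steel_specific:.2e} J/kg, "
      f"ratio: {silk_specific / steel_specific:.1f}x")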

A research group at MIT's Institute for Soldier Nanotechnologies (ISN) has developed a polymeric nanocomposite with properties similar to spider silk. This research by Gareth McKinley, Shawna Liff and Nitin Kumar appears in the current issue of Nature Materials. Their material is simply nanosized clay platelets, about 1 nm in size, in a polyurethane elastomer. The trick is the somewhat complicated process by which the hydrophilic clay nanoparticles are mixed into the hydrophobic polymer matrix. The clay is first dissolved in water, and then the water is replaced by fractional dilution with a solvent that also dissolves polyurethane. Polyurethane is then blended into the clay-solvent mixture, and the solvent is removed, leaving a nanocomposite of clay particles dispersed in a polyurethane matrix.

A commentary, "Nanocomposites: Stiffer by design," by Evangelos Manias of the Materials Science and Engineering Department of the University of Pennsylvania speculates that the combination of strength and elasticity of such materials will be useful for fabricating membranes for many purposes, including polymer fuel cells.

Itsy Bitsy Spider is a nursery rhyme from my childhood.

The itsy bitsy spider climbed up the water spout.
Down came the rain, and washed the spider out.
Out came the sun, and dried up all the rain
So the itsy bitsy spider climbed up the spout again.

References:
1. Shawna M. Liff, Nitin Kumar, and Gareth McKinley, "High-performance elastomeric nanocomposites via solvent-exchange processing," Nature Materials, vol. 6, no. 1, pp. 1-83 (abstract).
2. Anne Trafton, "Nanocomposite research yields strong and stretchy fibers" (MIT Press Release, January 18, 2007).
3. Evangelos Manias, "Nanocomposites: Stiffer by design," Nature Materials, vol. 6, no. 1, pp. 9-11.

February 12, 2007

Surf's Up!

In my amateur scientist days in high school, I would frequent a local electronics surplus dealer who had a pipeline to an unnamed major defense and commercial electronics manufacturer. These were still the vacuum tube days, and one item I enjoyed buying was an electronic assembly with many miniature dual-triode vacuum tubes. These assemblies were very inexpensive because all the circuitry was embedded in foam to protect it from vibration. To salvage anything required a laborious picking and scraping operation. These assemblies may have been part of a rocket or missile. A similar foam, TufFoam, developed by materials scientists at Sandia National Laboratories, is presently used for the same purpose. It's a closed-cell, water-blown, rigid polyurethane with densities as low as 0.033 grams per cubic centimeter.

What does all this have to do with surfing? The CTQs for surfboards include such requirements as high strength, light weight, and water resistance. Especially important is a high strength-to-weight ratio. These requirements were met traditionally by a supermalleable polyurethane foam manufactured by Clark Foam. Clark Foam's owner, Gordon "Grubby" Clark, started manufacturing surfboard cores in the sixties, capturing more than 80% of this market. This is no small business, since US surfboard sales are nearly $200 million a year. Unfortunately for the surfers, Clark Foam ceased operation on December 5, 2005, leaving surfers up the creek without a paddle.

In the same tradition as the Surfer-Biochemist comes the Surfer-Chemist. LeRoy Whinnery, a chemist and surfer at Sandia National Laboratories, knew about TufFoam. He and his team at Sandia worked to adjust the density of TufFoam to make it suitable for the surfboard application. Sandia has created samples that possess all the properties of the original surfboard material, plus some beneficial additional features. Most importantly, TufFoam does not contain the traces of toluene diisocyanate found in the original polyurethane foam.

The term, "dual use" is the designation for consumer technologies that are useful to the military. One example is the Pentium 4 computer chip which powers home PCs, but could also be used in military hardware. TufFoam is an example of dual use in the reverse direction, since it's a military material that found its way into a consumer product. This isn't the first time military technology has been applied to surfboards. Fiberglass and the original polyurethane foam were both pioneered by the military. Surfboards may only be the start, since TufFoam could be used as thermal and electrical insulation on other products. Its mechanical properties could be useful for automobile bumpers and aircraft wings.

References:
1. Sandia researchers develop low-density, environmentally friendly foam that may also be the answer to surf industry crisis (Original Sandia Press Release, February 13, 2006).
2. Heather Bourbeau, "Advanced Weaponry in the Pipeline." (Wired News)
3. Latest California News - Scientists ride wave of surf gear. (Sandia).
4. Nuke lab comes to aid of surf industry. (Science Blog, 2/14/06)
5. Sandia's low-density, environmentally friendly foam might save surfboard industry from a total wipeout. (Physicsorg.com)

February 09, 2007

Mr. Darwin's Pin Money

The Nobel Prize was first awarded in 1901, nineteen years after the death of Charles Darwin (1809 - 1882). It is likely that Darwin would have been nominated for the Nobel Prize in Physiology or Medicine. However, with all the controversy about his Theory of Evolution from his time to our own, he may not have been awarded the prize. The monetary award for a Nobel Prize today is $1.8 million.

How did scientists support themselves in the nineteenth century? No one at that time would even consider pursuing science unless he came from a wealthy family. Darwin was a grandson of Josiah Wedgwood on his mother's side, so he did come from monied circles. His father, Robert Darwin, was a wealthy doctor. Science was considered to be more of a hobby than a livelihood, and Darwin's father expected him to follow in the medical profession. Charles was revolted by his medical school experiences, so his father steered him into studies to become a clergyman so he would have a proper job as an Anglican parson. During his college studies, he was mentored by Reverend John Stevens Henslow, a professor of botany, and that started him on his career as a scientific naturalist.

So, did Darwin abandon the good life of his wealthy family to become a starving scientist? Well, not exactly. Recently, it was found that a photograph of Darwin in a library at Christ's College, Cambridge University, contained one of Darwin's checks in its frame [1]. The check, for £100, was drafted by Darwin to himself on March 21, 1872. This was not a small amount, since £100 in 1872 corresponds to $11,750 today [2, 3].

John van Wyhe, a science historian, thinks this withdrawal from a London bank was for use as household money at Darwin's permanent residence in the rural village of Downe, in Kent. Darwin had been paid for publication of an American edition of The Descent of Man just a few days before, so he was flush with cash. Van Wyhe suspects that Darwin's son, Francis, donated the check so that a signature could accompany the photograph. Darwin's signature on even mundane items, such as this check, is now valued at thousands of dollars. Van Wyhe is director of a Cambridge University project to put the complete works of Darwin online.

References:
1. Nicola Jones, "Darwin's cheque found in portrait frame" (Nature online).
2. Present Value of Currency.
3. British Pound to US Dollar Conversion.
4. The Complete Work of Charles Darwin Online (University of Cambridge).

February 08, 2007

Symmetry Breaking

One of the more interesting physical effects is the Einstein-de Haas Effect, demonstrated in 1915 in an experiment by Dutch physicist W. J. de Haas and Albert Einstein. I suspect that de Haas did the actual hands-on work. The existence of the electron had been proven in 1897 by J. J. Thomson in experiments with cathode ray tubes that also demonstrated that electrons were influenced by magnetic fields.

The experiment is quite simple. A cylinder of iron is suspended by a string inside a solenoid coil. Application of a current to any solenoid produces an axial magnetic field, and since there is no field gradient acting on the iron rod, the rod should remain in place. Also, since the apparatus is radially symmetric, nothing should cause the rod to rotate about its axis - but it does! The reason this happens is that there is asymmetry on the atomic level. Before the magnetic field was applied, the electron orbits should have been oriented more or less randomly. When the field is applied, the orbital moments of the electrons align with the field, and they impart angular momentum to the rod, which spins. You can likewise magnetize the iron rod in one direction using the solenoid coil, and reversing the current will cause a rotation.

The Einstein-de Haas Effect has been demonstrated by Researchers at the National Institute of Standards and Technology (NIST) in a different experiment [1]. They deposited a 50 nm thick film of permalloy (Ni80Fe20) on a cantilever and measured the torque on the cantilever in an oscillating magnetic field. The field bent the cantilever only a few nanometers, but the movement was detected by an optical fiber interferometer.

Of course, NIST is in the measurements business, and the purpose of the experiment was to devise a more accurate method of measuring the electron g-factor, which sets the gyromagnetic ratio, in magnetic materials. The g-factor has been measured to great precision for free electrons (2.0023193043768), but it has different values when the electron is bound in atoms in materials. Knowledge of its value for magnetic materials would aid development of spintronic devices. In the Ni80Fe20 film, it had a value of 1.83±0.10. As you can see from the uncertainty, improvement of the technique is required.
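
To get a feel for how small the mechanical signal is, here is a rough estimate (my own assumed numbers, not NIST's) of the angular momentum a small iron rod picks up when its magnetization is fully reversed. The electron relation |L| = 2·m_e·μ/(g·e) ties the rod's total moment M_s·V to an angular momentum, and a full reversal transfers twice that amount to the lattice.

import math

e, m_e = 1.602e-19, 9.109e-31   # electron charge (C) and mass (kg)
g = 2.0                         # free-electron g-factor; about 1.83 was measured for the NiFe film

# Assumed sample: an iron rod 2 cm long with a 1 mm radius
radius, length = 1e-3, 2e-2                 # m
density, M_s = 7870.0, 1.7e6                # kg/m^3 and saturation magnetization (A/m), assumed

volume = math.pi * radius**2 * length
I = 0.5 * (density * volume) * radius**2    # moment of inertia about the rod axis

dL = (2.0 * m_e / (g * e)) * (2.0 * M_s * volume)   # angular momentum given to the rod on reversal
print(f"dL = {dL:.1e} J*s, omega = {dL / I:.1e} rad/s")

The result is an angular velocity of only a few thousandths of a radian per second, which is why such sensitive detection schemes are needed.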

Reference:
1. T.M. Wallis, J. Moreland and P. Kabos, "Einstein-de Haas effect in a NiFe film deposited on a microcantilever," Applied Physics Letters, Sept. 18, 2006.

February 07, 2007

Data, Data, Everywhere

Most of science is based on experiment. We carve out a small slice of nature, change some of its conditions, such as temperature and pressure, and record the changes. Some scientific disciplines don't have it that easy. Astronomers, for example, can't do experiments; they can only observe and infer information through the Copernican Principle. The Copernican Principle states that the laws of physics are the same everywhere in the universe, so astronomers are confident that they can relate their observations to what others have found in the laboratory. Economics, the dismal science, is the prime example of a non-experimental science. Economists are not able to pump billions of dollars into specific markets to see what happens, so they obtain their data through observation.

The internet has enabled another method of observational discovery called data mining in which large volumes of data are analyzed to discover unexpected correlations. A more specialized type of data mining, structured data mining, is used to extract information from databases to quantify a suspected correlation.

Digital photographs are automatically tagged with a data field specified by the exchangeable image file format (EXIF) that contains such information as the date and time a photograph was taken and the make and model of the camera. One structured data mining study searched EXIF data on Flickr, one of the popular photograph sharing web sites on the internet, to rank the most popular cameras [1].
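
As a sketch of how such EXIF harvesting might begin at the level of a single file (assuming the Pillow imaging library; the file name is hypothetical):

from PIL import ExifTags, Image

def read_exif(path):
    """Return the image's EXIF tags keyed by human-readable names."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = read_exif("photo.jpg")                   # hypothetical file name
print(tags.get("Model"), tags.get("DateTime"))  # camera model and capture date/time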

In another interesting example of this technique, Jim Bumgardner found 35,000 photographs on Flickr that were tagged with the word "sunrise" and an additional 40,000 photographs tagged with the word "sunset." [2] His first observation, of course, is that sunset is slightly more popular than sunrise. Digging deeper, Bumgardner plotted the time of each photograph as a function of date and found the sunrise and sunset curves. There was scatter in the data, since not everyone lives at the same latitude, but since most of the world's photo-snapping populace lives in a narrow band of latitudes, the data are surprisingly good.
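
A sketch of the kind of plot Bumgardner made, assuming matplotlib and a list of capture times already pulled from the photos (the few timestamps below are hypothetical stand-ins for the tens of thousands of tagged photographs):

from datetime import datetime
import matplotlib.pyplot as plt

def plot_capture_times(timestamps, label):
    """Scatter local time of day (hours) against day of year."""
    days = [t.timetuple().tm_yday for t in timestamps]
    hours = [t.hour + t.minute / 60.0 for t in timestamps]
    plt.scatter(days, hours, s=4, label=label)

sunset_times = [datetime(2006, 1, 15, 17, 5), datetime(2006, 4, 15, 19, 40),
                datetime(2006, 7, 15, 20, 30), datetime(2006, 10, 15, 18, 20)]
plot_capture_times(sunset_times, "sunset-tagged photos")
plt.xlabel("day of year")
plt.ylabel("local hour of day")
plt.legend()
plt.show()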

References:
1. Flickr photo uploads by camera model.
2. Jim Bumgardner's Time Graphs.
3. Database Mining (Wikipedia).
4. EXIF Data (Wikipedia).

February 06, 2007

Unsafe at Any Speed

Americans have a huge investment in their personal automobiles. This obsession was once called "America's love affair with the automobile," but we've reached the point where love is turning to hate. When you consider the cost of ownership - the capital cost of the automobile itself, interest payments on an auto loan, insurance, maintenance and fuel costs, not to mention the dollar value of the time lost in traffic congestion - we spend more on our automobiles than we do on our children. In addition to the monetary costs, there's also the cost in human injury and death. It was only after Ralph Nader's 1965 book, "Unsafe at Any Speed", that US automobile companies started to address some serious safety issues. Foreign automakers' innovations in the safety area have recalibrated the acceptable risk levels for all automobiles, and many additional safety systems, such as air bags, are the result.

The population of physicists is a very small subset of the general population. For example, there are fewer than 45,000 members of the American Physical Society. It's sobering to consider how many famous physicists have been killed or injured in automobile accidents. The following list includes just the famous. Imagine how many not-so-famous lives have been impacted by the automobile:

Lev Landau, winner of the 1962 Nobel Prize for Physics for his theory of superfluidity in liquid helium II, was severely injured in a head-on collision with a truck in January of that same year and was in a coma for three months. Since it was expected that he would win the Nobel prize, extraordinary means were employed to keep him alive. Landau eventually succumbed to his injuries in 1968.

Eugene (Gene) Shoemaker was one of the founders of the field of planetary science and co-discoverer of Comet Shoemaker-Levy 9, which collided with the planet Jupiter in 1994. His Ph.D. research at Princeton proved that the Barringer Meteor Crater in Arizona was caused by a meteor impact. Shoemaker died in 1997 in an automobile accident in Australia. The Lunar Prospector space probe carried some of his ashes to the Moon. Shoemaker is the only person buried on the Moon.

Seymour Cray, renowned developer of the eponymous supercomputers, died in 1996 as a result of head and neck injuries from a multi-car accident in Colorado Springs, Colorado. The Jeep Cherokee he was driving was designed on a Cray supercomputer.

Mary Ward, one of the first women scientists and an accomplished microscopist, was likely the first automobile fatality. She died in 1869 when, as a passenger with her husband in an experimental steam-driven automobile, she suffered a broken neck after falling under its steel wheels.

Rudolf Karl Luneburg, an acknowledged visionary in ray and diffraction optics (the Luneburg Lens is named after him), died in an automobile accident in 1949.

William Shockley, co-inventor of the transistor and winner of the 1956 Nobel Prize for Physics, was seriously injured in an automobile accident in 1961. He seemed to physically recover from his injuries after several months, but there is speculation that this accident caused a personality disorder that affected the rest of his life. During his later life, he espoused theories of racial superiority and became an embarrassment for Stanford University and for Bell Laboratories, where he remained a consultant. He had become estranged from his children, who only learned of his death in 1989 from news reports.

Arnold Sommerfeld, a German physicist who was nominated eighty-one times for the Nobel Prize for Physics, died in 1951 after being struck by an automobile while walking with his grandchildren in Munich.

Max von Laue, winner of the 1914 Nobel Prize for Physics, died in 1960 from injuries sustained when a motorcycle collided with his automobile, causing it to roll over on a motorway in Berlin.

John Schrieffer, winner of the 1972 Nobel Prize for Physics, is currently serving a two-year prison sentence for causing an automobile accident while driving on a suspended license. The accident, in which Schrieffer fell asleep at the wheel, killed one person and injured many others. Schrieffer had nine speeding tickets prior to the accident.

February 05, 2007

Super Stiff Composite

Stiffness is a body's resistance to deformation under an applied force. It's the ratio of an applied force to the displacement caused by that force, and its usual units are newtons per meter. An infinitely stiff material exhibits no displacement under any applied force. Stiffness is proportional to Young's modulus when a body is under uniaxial tension or compression, but high stiffness does not always correspond to high strength. Many brittle materials have high stiffness, but they shatter quite easily.
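
For the uniaxial case, the connection to Young's modulus is simply k = EA/L. A quick check with an assumed steel bar (E = 200 GPa, 1 cm² cross section, 1 m long):

E = 200e9    # Young's modulus of steel (Pa)
A = 1e-4     # assumed cross-sectional area: 1 cm^2, in m^2
L = 1.0      # assumed length (m)

k = E * A / L
print(f"k = {k:.1e} N/m")   # 2.0e+07 N/m: a 1 kN load stretches this bar by only 50 micrometers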

Recently, a group of scientists from Washington State University, Ruhr-University Bochum (Germany), and the University of Wisconsin-Madison has produced a composite of barium titanate (BaTiO3) and tin with a stiffness (not strength) greater than diamond [1]. The composite was formed by dispersing 100 micrometer particles of barium titanate in molten tin at 300°C. The dispersion was assisted by an ultrasonic probe. They used a 100 Hz magnetic actuator to apply a force to small bars of this material, and they monitored the bending with a laser as temperature was varied. Some of the composite specimens were found to be nearly ten times stiffer than diamond at a temperature between 58°C and 59°C.

What's happening? Barium titanate has a phase transition as it cools, so its volume is larger at room temperature. At that point, the tin is solid, so the particles are under high compressive strain. At the magic temperature between 58°C and 59°C, the stored strain energy is released, causing an apparent high stiffness. It's more of a trick than an improved material, but the composite may have some niche applications. Mark Spearing, a materials scientist at Southampton University (UK), speculates on two possible applications: the composite may be useful as a shock-resistant casing or a tunable damper [2].

References:
1. T. Jaglinski, D. Kochmann, D. Stone, and R. S. Lakes, "Composite Materials with Viscoelastic Stiffness Greater Than Diamond," Science, vol. 315, no. 5812 (2 February 2007), pp. 620-622.
2. Diamond loses its stiffness crown to new material (New Scientist Online)

February 02, 2007

Linus Pauling (Part II)

The previous post summarized the influence Linus Pauling had on Chemistry. Indeed, Pauling, who won the Nobel Prize in Chemistry in 1954, was primarily a chemist, but he was a metallurgist at heart. Listed below are his papers relating to metals published in the Proceedings of the National Academy of Sciences:

Principles Determining the Structure of High-Pressure Forms of Metals: The Structures of Cesium(IV) and Cesium(V), PNAS 1989; 86: 1431-1433.

Factors Determining the Average Atomic Volumes in Intermetallic Compounds, PNAS 1987; 84: 4754-4756.

• (With Barclay Kamb) A Revised Set of Values of Single-Bond Radii Derived from the Observed Interatomic Distances in Metals by Correction for Bond Number and Resonance Energy, PNAS 1986; 83: 3569-3571.

Evidence from Bond Lengths and Bond Angles for Enneacovalence of Cobalt, Rhodium, Iridium, Iron, Ruthenium, and Osmium in Compounds with Elements of Medium Electronegativity, PNAS 1984; 81: 1918-1921.

Covalence of Atoms in the Heavier Transition Metals, PNAS 1977; 74: 2614-2615.

Metal-Metal Bond Lengths in Complexes of Transition Metals, PNAS 1976; 73: 4290-4293.

Valence-Bond Theory of Compounds of Transition Metals, PNAS 1975; 72: 4200-4202.

Maximum-Valence Radii of Transition Metals, PNAS 1975; 72: 3799-3801.

The Structure and Properties of Graphite and Boron Nitride, PNAS 1966; 56: 1646-1652.

A Theory of Ferromagnetism, PNAS 1953; 39: 551-560.

Electron Transfer in Intermetallic Compounds, PNAS 1950; 36: 533-538.

References:
1. Pauling Web Site at Oregon State University
2. Proceedings of the National Academy of Sciences of the United States of America

February 01, 2007

Linus Pauling (Part I)

Linus Pauling was a preeminent chemist of the twentieth century, and he was one of the first to apply quantum mechanics to Chemistry. He was awarded the Nobel Prize in Chemistry in 1954, and the Nobel Peace Prize in 1962 for his campaign about the health effects of above-ground nuclear testing. He just missed out on a Nobel Prize in Physiology or Medicine for the structure of DNA, since he was fixated on a triple helix. Of course, he didn't have Rosalind Franklin's x-ray images to steer him to a double helix.

In 1931, Pauling published a seminal paper in the Journal of the American Chemical Society, On the Nature of the Chemical Bond, that was later expanded into his famous book, The Nature of the Chemical Bond (ISBN 0801403332, Cornell University Press, 1939). His theory can be summarized in six rules for the shared electron chemical bond:

• The electron-pair bond forms through the interaction of an unpaired electron on each of two atoms.
• The spins of the electrons have to be opposed.
• Once paired, the two electrons can not take part in additional bonds.
• The electron-exchange terms for the bond involve only one wave function from each atom.
• The available electrons in the lowest energy level form the strongest bonds.
• Of two orbitals in an atom, the one that can overlap the most with an orbital from another atom will form the strongest bond, and this bond will tend to lie in the direction of the concentrated orbital.

The first three of these rules predated Pauling, but the last three were a synthesis of his quantum mechanical studies.

I once attended a talk by Paul Emmett, a great chemist in his own right, who married Pauling's sister. Commenting on Pauling's ambition, he said that he and Pauling started out as graduate students together, but somewhere along the line he ended up working for Pauling. Pauling never received a high school diploma, since he lacked some credit hours. However, his high school was nice enough to give this double Nobelist a diploma 45 years later.

Reference:
1. Pauling Web Site at Oregon State University