Knoxville Police Detective Art Bohanan and ORNL's Michelle Buchanan examine fingerprints. Buchanan's research suggests that children's fingerprints don't last as long as adult fingerprints because of a difference in chemical composition.
While investigating the abduction and murder of an East Tennessee girl, Art Bohanan, a specialist with the Knoxville Police Department, encountered a phenomenon that had perplexed him before. Although witnesses saw the child enter the suspect's car, none of her fingerprints could be found anywhere inside the vehicle.
The suspect initially confessed but later recanted, making the absence of the victim's fingerprints a hurdle for the prosecution. He was convicted, but for Bohanan, a veteran of several grim criminal investigations involving children, the case reinforced a previous hunch: kids' fingerprints don't stick around the way adults' do.
Bohanan found his observation surprisingly fresh. Calls to contacts in the Federal Bureau of Investigation (FBI), the National Institute of Justice, Scotland Yard, and even a police friend in Russia turned up no evidence that the problem had been considered, much less studied. A letter from the FBI referred to it as an "area that needs to be explored."
Bohanan described his problem in a telephone call to ORNL Director Alvin Trivelpiece, who gathered 10 ORNL researchers to propose a solution. After the detective met with researchers, Michelle Buchanan of the Chemical and Analytical Sciences Division began a project that, as Bohanan puts it, "could lead us to all kinds of things down the road."
Buchanan enlisted a willing group, ages 4 to 17, to shake vials of alcohol between their thumb and forefinger to collect chemicals from their skin. She also took similar, noninvasive samples from adults, ages 19 to 46. Undergraduate students Jennifer Fletcher of Auburn University, Matt Johnson of North Dakota State University, and Scott Shultz of Transylvania University conducted gas chromatography-mass spectrometry tests on the samples in Buchanan's lab. The results confirmed the detective's hunch: Kids' fingerprints are different.
"We see a marked difference in the chromatograms," Buchanan said. "Children's fingerprints contain more volatile chemicals, such as free fatty acids, probably because they haven't gone through puberty yet. Adult prints display longer-lasting, higher molecular weight compounds such as long-chain alkyl esters of fatty acids."
Knowledge of the chemical difference between adults' and children's fingerprints is likely to lead to a test for latent juvenile fingerprints. Buchanan said that as her organic mass spectrometry group identifies the compounds in the prints, Tuan Vo-Dinh of ORNL's Health Sciences Research Division is developing a computational method to enhance the visualization of fingerprints. The fact that the gas chromatographic profiles identified so many chemicals present in the skin has Buchanan theorizing that the research could lay the groundwork for new noninvasive diagnostic procedures.
"It has been reported in the literature that a number of compounds present in the skin's surface are indicators of some diseases," Buchanan said. "We hope to improve sampling techniques to develop new methods to detect target compounds that can tell us more about what's going on inside the body."
Detective Bohanan, an inventor of a method for lifting fingerprints, enthusiastically envisions what his police work and ORNL's research could lead to in solving crimes. "Forensic evidence is often lost or tainted because of delays in analysis or accidents along the way," he says. "I would also like to see this evolve into skin patch drug tests that could be used on the scene." For Bohanan, the medical applications that could result would be an especially gratifying bonus from a scientific pursuit of such sad origins.
ORNL Technique Can Screen for Carriers of Cystic Fibrosis Gene
A new ORNL-developed technique that could be used to rapidly screen many people for the defective gene that causes cystic fibrosis (CF) has been applied by ORNL and the University of Tennessee at Knoxville Medical Center (UTMC). In a test of samples from 30 persons who have normal or defective forms of the CF gene, the technique was 100% accurate, as reported in the journal Rapid Communications in Mass Spectrometry.
CF is an inherited fatal disease caused by a genetic defect. About 4% of Americans, mostly Caucasians, carry a defective form of the gene, which makes it the most common genetic defect of its severity in the United States. About 40,000 people in the United States have cystic fibrosis.
People with the disease suffer from respiratory and digestive disorders. Because their lungs become covered with a sticky mucus that promotes infection by bacteria, many CF patients require frequent hospitalizations and continuous use of antibiotics and other expensive medications. The total cost of caring for a typical person with cystic fibrosis, who has a median life expectancy of almost 30 years, is estimated at $250,000.
Because each person with CF is the child of parents who both carry defective forms of a particular gene, there is interest in large-scale screening to let people know their chances of having a child with CF.
The rapid screening technique was developed by C. H. (Winston) Chen and Steve Allman, both from the Photophysics Group of ORNL's Health Sciences Research Division, in conjunction with L.-Y. Ch'ang, M. Schell, and C. Ringelberg, all of UTMC's Graduate School of Medicine, Department of Medicine, and Dr. Karla J. Matteson, a CF expert at UTMC's Graduate School of Medicine, Department of Medical Biology. They were assisted by K. Tang, a graduate student from Vanderbilt University. Other collaborators at ORNL include Bruce Jacobson, Mayo Uziel, K. L. Lee, M. Doktycz, G. B. Hurst, Scott McLuckey, Michelle Buchanan, and Richard Woychik. The ORNL work was sponsored by the internally funded Laboratory Directed Research and Development Program.
Nelli Taranenko, Winston Chen, and Steve Allman use the laser mass spectrometer to screen for the cystic fibrosis gene in a prepared hair sample.
"Our technique uses laser mass spectrometry," Chen said, "and this is the first time that mass spectrometry has been used to diagnose a genetic disease by DNA analysis. One advantage of this technique over conventional analysis by gel electrophoresis is speed--it's at least ten times faster because the whole procedure can be done in minutes, not hours. Another is that it does not use toxic chemicals or radioactive materials, which require costly methods of disposal."
In this technique, laser mass spectrometry detects a common defect or mutation in the CF gene--the lack of key genetic material. The absence of three pairs of chemical bases in a specific region of the gene on chromosome 7 is responsible for 70 percent of cystic fibrosis cases. Chemical bases are the building blocks of DNA, the blueprint for life; the particular sequence and number of these bases, which vary from gene to gene, determine a gene's function in carrying out a life process or transmitting a trait to the organism's offspring.
The causes of cystic fibrosis, long a mystery, are now becoming clear, thanks to advances in biology. Humans have a gene that manufactures a special protein--CFTR--that helps prevent the buildup of sticky mucus in the lungs. If the gene is defective, it causes cystic fibrosis.
Each person carries two alleles of the CF gene, one inherited from each parent. One correctly encoded allele is adequate for normal CFTR function. People who have a single defective allele are called carriers; they can pass the defect on to their offspring. Those with two defective CFTR alleles have cystic fibrosis.
If two carriers mate and have a child, the probability is 25% that the child will have cystic fibrosis. Thus, an accurate, fast screening technique would inform more couples of the likelihood that their future children would be born with CF.
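That 25% figure follows directly from Mendelian inheritance. A minimal enumeration of the four equally likely allele combinations (an illustration, not part of the ORNL work):

```python
from itertools import product

# Each carrier parent has one normal ("N") and one defective ("d") allele.
parent = ["N", "d"]

# Enumerate the four equally likely allele pairs a child can inherit.
children = list(product(parent, parent))

affected = sum(1 for pair in children if pair == ("d", "d"))   # two defective alleles -> CF
carriers = sum(1 for pair in children if pair.count("d") == 1)

print(f"P(child has CF)       = {affected}/{len(children)}")   # 1/4
print(f"P(child is a carrier) = {carriers}/{len(children)}")   # 2/4
```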
The ORNL-UTMC technique can screen people to determine if they are normal, if they carry a defective CF allele, or if they have CF. For the experiment, the UTMC staff extracted DNA from human hair samples and isolated the part of each CF gene that would contain the known defect if present. They copied this segment millions of times using the polymerase chain reaction technique.
The UTMC staff sent to ORNL 30 samples, each a barely visible droplet, for a blind test. The ORNL scientists mixed each DNA segment with a chemical that absorbs laser light. The mixture was vaporized by ultraviolet light from a laser. The electrically charged DNA molecules formed in the vapor were detected in a time-of-flight mass spectrometer based on differences in size.
Because the segments from defective CF alleles lack three pairs of chemical bases, they are smaller and lighter than the segments from normal alleles. A carrier's sample contains both segment sizes, whereas a sample from a person with CF contains only the smaller one. Since a lighter segment travels faster than a heavier one between the sample plate and detector, the three types of samples can be distinguished by differences in time of travel.
These differences are displayed on a computer screen as spectral lines with peaks and valleys. This information indicates whether a person has two normal alleles, two defective alleles (cystic fibrosis), or one of each, making that person a carrier. ORNL's identifications of the 30 samples agreed completely with the results of conventional analyses.
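The separation-by-weight idea can be sketched numerically. In a linear time-of-flight instrument, a singly charged ion accelerated through voltage V arrives after t = L·sqrt(m/2qV), so a segment missing three base pairs arrives measurably earlier. The instrument parameters and fragment masses below are invented for illustration; only the relative ordering of the peaks matters:

```python
import math

def flight_time_us(mass_da, length_m=1.0, charge=1, volts=20_000):
    """Linear TOF: t = L * sqrt(m / (2 q V)) for a singly charged ion."""
    DA = 1.660539e-27   # kg per dalton
    E = 1.602177e-19    # coulombs per elementary charge
    t = length_m * math.sqrt(mass_da * DA / (2 * charge * E * volts))
    return t * 1e6      # microseconds

# Hypothetical fragment masses: a normal PCR segment and one missing
# three base pairs (~660 Da per double-stranded base pair on average).
normal_da = 30_000.0
deleted_da = normal_da - 3 * 660.0

t_normal = flight_time_us(normal_da)    # heavier, normal-allele segment
t_deleted = flight_time_us(deleted_da)  # lighter, defective-allele segment

def genotype(peaks_us, tol=0.01):
    """Call a genotype from which flight-time peaks appear in a spectrum."""
    has = lambda t: any(abs(p - t) < tol * t for p in peaks_us)
    if has(t_normal) and has(t_deleted):
        return "carrier"                 # both alleles' segments present
    return "CF" if has(t_deleted) else "normal"

print(genotype([t_normal]))             # normal
print(genotype([t_normal, t_deleted]))  # carrier
print(genotype([t_deleted]))            # CF
```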
Genes for Jeans: Engineered Enzyme
Thanks to altered genes, it can stone wash jeans without stones and make them look better than ever. It can "eat" the paper wastes that occupy 40% of U.S. landfill space and remove ink from newspapers so the paper can be recycled. It can convert wood to sugar, which can be turned into ethanol for fuel.
"It" is a new strain of bacteria discovered and altered by ORNL researchers. The altered bacteria rapidly produce cellulase, an enzyme used in fabric finishing detergents to smooth fabric, such as blue jeans, by removing puffy "pills" (knots that make cloth rough) from knitted material. Tests show that the bacterial enzyme produces a more attractive textile product than the commercially used acid cellulase fermented by fungi.
Craig Dees shows jeans stone washed conventionally by a fungal enzyme and jeans washed by a bacterial enzyme cellulase produced at ORNL. Tests show that the bacterial enzyme produces a smoother textile product with less backstaining by the colored dye than the commercially used acid cellulase fermented by fungi. Photograph by Bill Norris.
Craig Dees, a researcher in ORNL's Health Sciences Research Division who genetically altered the special bacteria, says that the bacterial enzyme has several advantages over the fungal enzyme. "The bacteria produce much more enzyme than fungi can in the same time," he says. "There is less back staining, or smearing of the blue dye, on the white areas of the jean cuffs. The bacterial enzyme can withstand a wider range of acidity levels and temperatures during textile processing. Our bacterial cellulase is not eaten by protease, an enzyme that may be added to detergents.
"The bottom line is that replacing acid cellulase with bacterial cellulase should save money. And stones are not needed with the ORNL enzyme to stone wash jeans!"
Dees says the bacterial enzyme has been tested also on the wood chips used for bedding in ORNL's Mouse House, which shelters 300,000 mice. The bedding is replaced every few days, and the old chips are discarded. About 12% of the trash in the Oak Ridge Reservation's landfill consists of mouse bedding.
"After immersing mouse bedding in a solution of engineered enzyme," Dees says, "we reduced its volume and weight by 50% in 8 days."
Tim Scott, head of ORNL's Bioprocessing Research and Development Center in the Chemical Technology Division, has immobilized the genetically altered bacteria on beads in a fluidized bioreactor. His experiments have shown that these bacteria effectively convert cellulose to wood sugar. Other bacteria can be used to turn this sugar into alcohols, including the liquid fuel ethanol that can be used to power automobiles.
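The cellulose-to-ethanol route has a theoretical ceiling set by simple stoichiometry (hydrolysis yields one glucose per anhydroglucose unit of the cellulose chain; fermentation yields two ethanol molecules per glucose). A back-of-the-envelope sketch using textbook molar masses, not figures from the ORNL work:

```python
# Molar masses in g/mol
ANHYDROGLUCOSE = 162.14  # one C6H10O5 unit in the cellulose chain
ETHANOL = 46.07

def max_ethanol_g(cellulose_g):
    """Theoretical ethanol yield: each cellulose unit hydrolyzes to one
    glucose, and each glucose ferments to two ethanol molecules."""
    mol_glucose = cellulose_g / ANHYDROGLUCOSE
    return mol_glucose * 2 * ETHANOL

yield_g = max_ethanol_g(1000.0)  # per kilogram of cellulose
print(f"Theoretical maximum: {yield_g:.0f} g ethanol per kg cellulose")
```

Real fermentations recover less, since some sugar goes to cell growth and by-products, but the calculation shows why cellulose wastes are an attractive fuel feedstock.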
The bacterium has also been shown to grow in high concentrations in a solution of compounds that are toxic to many bacteria, such as saccharinic acid, furfural, and cinnamyl alcohol. Thus, the bacterium and its enhanced enzyme will be useful in modifying industrial waste streams that contain cellulose, such as those from paper production.
Bird Reproduction Unaffected by Moderately Contaminated Fish
Can birds that eat contaminated fish have as many young as birds that eat untainted fish? Since the 1960s, studies have shown that high levels of some environmental contaminants can adversely affect the health and reproduction of wildlife. However, according to a recently released study by ORNL, moderate levels of contamination have no apparent effect on reproduction in fish-eating birds.
Dick Halbrook, Glenn Suter, and Bradley Sample, scientists in ORNL's Environmental Sciences Division, have found no apparent difference in reproductive success between great blue herons or ospreys that eat moderately contaminated fish and birds of the same species that feed on uncontaminated fish. However, in a related experiment they found that a mammal species, the mink, had fewer young after eating fish from contaminated streams (fish make up one-half of the mink's diet).
The great blue heron is a long-necked, fish-eating wading bird with a long tapering bill, large wings, and soft plumage. The osprey is a large fish-eating hawk. The mink is a semiaquatic carnivorous mammal that resembles a weasel and has partially webbed feet, a short bushy tail, and a soft thick coat.
A great blue heron in East Tennessee. Recent studies have shown that the reproductive success of great blue herons has not declined for those birds that eat moderately contaminated fish from Oak Ridge waterways. Photograph by Ron McConathy.
Fish in nearby Poplar Creek and the Clinch River, where the ORNL study was conducted, are contaminated with polychlorinated biphenyls (PCBs), mercury, and other heavy metals. PCB and mercury levels in fish from the Clinch River and Poplar Creek are less than those seen in fish from locations in Lake Michigan. Contaminants in fish from the Great Lakes are suspected to have adverse effects on populations of fish-eating wildlife. Halbrook reports that the eggs and chicks of the heron were found to contain PCBs and mercury, but their levels were less than those known to adversely affect other bird species. All heron chicks observed in this study were born normal and showed no defects. The number of offspring of the mink was lower than usual, but the young were normal.
For this study of ecological risk at a DOE site, Sample applied a pioneering ecological risk assessment method that Suter and Larry Barnthouse (formerly of ORNL) developed in the 1980s for the U.S. Environmental Protection Agency (EPA). EPA is currently using this method to assess the risk to the health of plants and animals of different types and levels of environmental contaminants.
The method considers several lines of evidence regarding the results and effects of contamination. Examples are the concentrations of each contaminant in the tissues of fish and of the birds eating them, damage to DNA in living cells and other bioindicators, reproductive success of birds and mammals exposed to contaminants, and the numbers and types of fish present.
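A common way to reduce one such line of evidence to a number is the hazard quotient: measured exposure divided by a toxicity reference value below which no adverse effect is expected. A minimal sketch with invented intake and reference values (the study's actual numbers are not given here):

```python
def hazard_quotient(intake_mg_per_kg_day, trv_mg_per_kg_day):
    """HQ < 1 suggests the exposure alone is unlikely to cause harm;
    HQ >= 1 flags the contaminant for closer study."""
    return intake_mg_per_kg_day / trv_mg_per_kg_day

# Hypothetical daily intakes for a fish-eating bird vs. reference values,
# both in mg of contaminant per kg of body weight per day.
exposures = {"PCBs": 0.12, "mercury": 0.02}
trvs      = {"PCBs": 1.8,  "mercury": 0.045}

for chem in exposures:
    hq = hazard_quotient(exposures[chem], trvs[chem])
    flag = "of concern" if hq >= 1 else "below effect threshold"
    print(f"{chem}: HQ = {hq:.2f} ({flag})")
```

In practice a screening calculation like this is weighed together with the field evidence (tissue residues, bioindicators, observed reproductive success) rather than used alone.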
Studies like the Oak Ridge investigation help test the validity of ecological risk assessment methods and models for predicting contaminant effects on plants, animals, and entire ecosystems. Such experimental results and field data allow risk assessors to correct their methods and models so that more accurate predictions can be made.
This project was conducted as part of the Comprehensive Environmental Response, Compensation, and Liability Act remedial investigation of the Clinch River and Poplar Creek and is being funded by DOE's Environmental Restoration Program. A primary objective of the remedial investigation was to determine whether contaminants pose a sufficient risk to human health or the environment to justify or necessitate cleanup actions.
Can Pond Sludge Be Mined for Useful Metals?
Studies of pond sludge from DOE's Oak Ridge K-25 Site suggest there may be wealth in the waste. Tests by ORNL researchers show that, if the material is heated and cooled properly, a variety of minerals in the mud can be mined with a magnet. The remaining material is crushed glass, which could be inexpensively disposed of in a landfill.
Alex Gabbard and Charles Malone, both of the High Temperature Fuel Behavior Group in ORNL's Metals and Ceramics Division, conducted tests on surrogate sludge that contained a dozen nonradioactive metals and nonmetallic elements found in the actual sludge. These elements include aluminum, copper, iron, nickel, silicon, silver, sodium, and sulfur. The surrogate sludge did not contain uranium or other radioactive metals, which are present in the real material.
"Our experiments showed that cooking the pond sludge in graphite crucibles in an electrical resistance furnace has several effects," Gabbard says. "The volume of the material is reduced by more than two-thirds. The material is transformed into glass, or vitrified, in the shape of the crucible. And while a heated liquid, some valuable metals--mainly, iron, nickel, and copper--migrate to outer surfaces where they combine as 'gold spots' that can be easily separated magnetically from the rest of the material once it has hardened and is crushed. Recovering such useful metals is in the spirit of the Resource Conservation and Recovery Act."
When pond sludge from an Oak Ridge nuclear facility is heated in graphite crucibles in an electrical resistance furnace, glass globes covered with gold spots are formed. While the sludge is a heated liquid, some valuable metals--mainly iron, nickel, and copper--migrate to outer surfaces as gold spots. These metals can be easily separated magnetically from the rest of the material once it has hardened and is crushed.
Gabbard says the researchers have fully developed facilities to test actual pond sludge samples containing uranium or other waste materials (including those containing plutonium) to see if these elements also precipitate out of the mud as magnetic "gold spots" along with the iron, copper, and nickel. In Gabbard's vision, if this technique actually could mine uranium and plutonium from waste, the nuclear material extracted could be stored for potential use as fuel for nuclear power plants while solving a long-term waste problem.
The researchers do not yet understand why iron, nickel, copper, and sulfur in the sludge form golden globules that are not strongly attached to the rest of the material. Scanning electron micrographs show that the numerous iron-nickel-copper globules that make the black glass sparkle are surrounded by a sulfur skin. Gabbard thinks this skin may keep the spherical globules from bonding with the molten mud as it solidifies.
The formation and separation of the globules, Gabbard says, are likely due to the oxygen-free conditions created by the treatment method. To dry the sludge, an oxygen purifier was used to remove oxygen, and the sludge was heated to 1150°C in a nonoxidizing, helium atmosphere in the furnace. By contrast, in situ vitrification processes for turning radioactively contaminated soil into glass in tests at ORNL waste burial grounds add oxygen to the vitrified material.
The researchers also found that the best containers for the sludge were made of graphite, not ceramics. The ceramic crucibles became quite brittle and fractured during the heating process, but the graphite crucibles were not visibly affected and were reusable.
Pond sludge at the Oak Ridge K-25 Site is being stored in drums. DOE is evaluating a plan to dry the sludge, mix it with concrete, and repackage it in new stainless steel drums and overpacks. The packaged material would then be placed in storage facilities and indefinitely monitored. The cost of the project is estimated at $90 million to $147 million.
The research was supported by the Environmental Management and Restoration Program led by J. M. Kennerly at the K-25 Site. The source of its funding is DOE's Office of Environmental Management.
While acknowledging the high cost of vitrifying the sludge by electrical resistance heating, Gabbard suggests that vitrification may be an economical approach in the long run.
"The metals in the mud are a resource that could be sold for industrial use," he says. "If uranium can be recovered by this technique, it could be sold for energy production. After the metals are separated out, the remaining crushed glassy material could be reclassified as a waste material that requires low-cost disposal. This approach eliminates the need for long-term monitoring."
Computer Code Predicts Cancer Risk for Bone Marrow Irradiation
A computer code for predicting the risk that a patient will develop cancer or die within 30 days of radiation treatment of bone marrow has been developed at ORNL. The code can be used to help plan a safe course of treatment for patients needing bone marrow transplants to boost or replace their immune systems or radiation therapy to kill cancerous cells.
Called MarCell (from marrow cell), the code was originally developed for the Department of Defense (DOD) and the North Atlantic Treaty Organization (NATO) to predict the survival rate for soldiers exposed to radiation for weeks or months during nuclear war. MarCell was developed by Troyce Jones, a researcher in the Health Sciences Research Division; Max Morris, a research statistician in the Computational Physics and Engineering Division; and Jafar Hasan, who spent an undergraduate science semester with Jones. Hasan, who was sponsored by DOE and ORNL under the Great Lakes Colleges Association/Associated Colleges of the Midwest Oak Ridge Science Semester, recently graduated from Albion College in Michigan. He has continued to work on the code as a consultant to ORNL.
"The development of the code led to a revolutionary finding about the nature of bone marrow," Jones says. "We found that the cells that were most radiosensitive, the stem cells, seem to be considerably less important than previously believed. We have found that the cells that are the most important--in fact, critical to the production of blood cells--are cells of the marrow stroma including fibroblasts.
"Fibroblasts and even stromal cells have traditionally been considered the least important cells in the complex process of blood formation. Our calculations support recent experimental evidence that stromal cells are critical to this process."
Besides DOD's Defense Nuclear Agency and the NATO applications, MarCell's capabilities are of considerable interest to the National Aeronautics and Space Administration (NASA). NASA's Langley Research Center in Hampton, Virginia, will be using MarCell to determine the risk to astronauts posed by radiation doses from solar flares and cosmic rays in outer space. Astronauts are exposed to space radiation during shuttle flights and work on orbiting space laboratories. NASA plans to use MarCell to determine shielding needs and formulate guidelines for altering or ending an individual's space activities.
"We think our code also can be helpful for doctors and patients planning bone marrow transplants," Jones says. "It can predict the risk of total body cancer and leukemia from various sources, levels, and dose rates of radiation from medical, occupational, and environmental exposures."
Recently, human data on leukemia and lymphoma cells have been used to model response kinetics of those malignant cells, as compared with normal stem and stromal cells. For the malignant cells, cell proliferation rates or cancer doubling time data can be entered for the individual patient.
Before receiving a bone marrow transplant, a patient must be given radiation and cytotoxic chemicals to suppress the normal immune response. Otherwise, the body would reject the transplanted marrow through a strong reaction known as graft-versus-host disease. Also, because leukemia originates in bone marrow cells, it may be necessary to kill the existing marrow and then transplant healthy marrow from a matched donor.
For aplastic anemia, a marrow transplant is needed to replace the "lazy" cells that fail to supply the needed blood cells. The lazy cells are destroyed by radiation and replaced by cells that proliferate more readily.
In radiation therapy for leukemia or aplastic anemia, for example, a patient may undergo several treatments spread out over a few days. A patient may receive a total dose of 1000 to 1200 rads of gamma radiation at less than 10 rads per minute. The radiation source may be cobalt-60 or cesium-137; more modern procedures use proton therapy.
"The user-friendly code helps you decide how many radiation treatments are needed, how to minimize risk to the patient, and how long the recovery period will last for a particular course of treatment," Jones says. "It provides some information on patient responses to different therapeutic aids such as antibiotics and blood transfusions. MarCell models the rate of cell loss and recovery for marrow stromal cells and stem cells exposed to 12 types of radiation, including X rays, gamma rays, neutrons, beta radiation, tritium, and mixed fields of neutron-gamma radiations."
In response to menu selections, the user simply enters dose, dose rate, number of exposures, and time between each exposure. A graph appears that reveals how many bone marrow cells die and how fast during treatment. An option permits the same graph to show how different cell lineages repopulate during, between, and after individual radiation treatments; some of the injured marrow cells may proliferate and replace some of the normal bone marrow stem cells. MarCell calculates the increased risk of cancer of the blood and lymph glands that results from marrow transfusion or immunosuppression.
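The kind of bookkeeping MarCell automates can be illustrated with a far simpler textbook model (illustrative only, not MarCell's actual equations): exponential cell kill per fraction plus exponential repopulation between fractions.

```python
import math

def surviving_fraction(dose_per_fx, n_fx, hours_between, d0, t_double=48.0):
    """Toy fractionation model (not MarCell): each fraction of dose_per_fx
    rads kills cells as exp(-dose/d0), and the survivors repopulate with
    doubling time t_double hours between fractions, capped at the starting
    population."""
    s = 1.0
    for i in range(n_fx):
        s *= math.exp(-dose_per_fx / d0)
        if i < n_fx - 1:  # repopulation happens between fractions
            s = min(1.0, s * 2 ** (hours_between / t_double))
    return s

# Example: 1200 rads total, delivered as 6 fractions of 200 rads 12 hours
# apart, with a hypothetical D0 of 100 rads for marrow cells.
s = surviving_fraction(dose_per_fx=200, n_fx=6, hours_between=12, d0=100)
print(f"Surviving fraction of marrow cells: {s:.2e}")
```

MarCell's graphs of cell kill and repopulation during, between, and after treatments track exactly this kind of interplay, with far more realistic kinetics for each cell lineage.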
Jones had been interested in modeling bone marrow kinetics since the 1970s, but the standard approach did not work adequately. Then he and Morris decided to work backwards by starting with data on survival and death rates of animals exposed to known doses of radiation that were delivered over the course of hours, days, and months.
"We used data on more than 18,000 test animals including mice, rats, dogs, sheep, swine, and burros," Jones says. "These experiments were conducted in the 1950s and '60s by the U.S. Atomic Energy Commission and British Medical Research Council. To estimate risk of cancer and leukemia, we used risk coefficients based on the response to radiation of radiologists, atomic bomb survivors, and victims of radiation accidents."
Early bone marrow codes that attempted to model cell losses and recovery and predict risk of death failed for marrow cells irradiated over a long time. "The approach," Jones says, "was to model long-term repopulating stem cells as the most radiosensitive and as the cells most critical to blood formation. The reason is that stem cells are the parents of specialized marrow cells, such as platelets (which stop bleeding), red blood cells (which carry oxygen), white blood cells (which fight disease), and malignant cells that cause leukemia and lymphomas."
"We tried a different approach," Jones says. "Morris used special techniques to estimate the numerical constants in our equations. The equations are based on simple models that show how marrow cells can be grouped into normal, injured, and killed cells and how new cells must be supplied to replace the killed cells."
Morris used a powerful workstation computer to run the math backwards from all the dose responses to optimize the numerical coefficients in the model. He related known radiation exposures to animal survival and death rates and to cell survival and death rates.
Hasan made the code more user friendly for the medical community. He also modeled malignant cell kinetics.
The ORNL scientists were the first to use modern statistical techniques and sophisticated computing power to address the thorny problem of the death rate and growth rate of irradiated bone marrow cells.
"Our big surprise was that the results of Morris's calculations did not describe stem cells as the cells most critical to the survival or death of an animal. I suggested that stroma cells might be more critical than believed, and the results on both animal and cell survival rates matched these theoretical cells of the mathematical model."
Jones said this finding has been confirmed by more recent experimental work. The evidence shows that stroma cells (stroma means "bed" in Greek) do much more than simply serve as a supportive layer to which stem cells must attach before they can proliferate. "These fatty yellow cells," Jones notes, "produce growth factors, or cytokines, that tell stem cells when and how fast to divide and how to differentiate into platelets and red, white, and other kinds of blood and lymphatic cells."
The ORNL scientists' code calculations led to 10 papers in a number of journals including the International Journal of Radiation Oncology. A drawing by ORNL graphic artist Allison Baldwin concerning their work graces the cover of the June 1993 issue of Experimental Hematology.
Ron Goans and other scientists at Oak Ridge Associated Universities (ORAU) are interested in using the same methods to model losses and recovery of irradiated lymphocyte cells. Such information can serve as an early biological indicator of the response of victims of radiation accidents as well as the responses of patients to a series of therapeutic treatments.
Robert Ricks of ORAU's Radiation Emergency Assistance Center/Training Site (REAC/TS) has collected a wealth of data from the former Soviet Union concerning human response to the Chernobyl reactor accident. MarCell is expected to be useful in analyzing and standardizing these data and other information on accidental and therapeutic exposures in the REAC/TS database.
A Diamond Rotor for a Nickel Micromotor
You can easily cut out a cookie from dough with a cookie cutter. With a little more effort, you can cut out a puzzle piece from plywood with a jigsaw. But how do you cut out a rotor for a micromotor from a diamond--especially if it's as small as the period at the end of a sentence?
Scientists at ORNL have worked out a method for making diminutive diamond devices. They are now collaborating with researchers in Research Triangle Park, North Carolina, to develop a nickel micromotor with a diamond rotor on a silicon substrate.
ORNL scientists have worked out a method for making a diamond rotor for a nickel micromotor on a silicon substrate. The diamond rotor would be about the size of the period at the end of this sentence. In this micrograph, a miniature, 13-micrometer-thick, single-crystal diamond star made at ORNL sits on top of a period on a piece of paper.
Perfection of the technique for producing diamond-based microelectromechanical systems (MEMS) could lead to diamond microsensors. These could be used where other materials could not--in corrosive liquids, in the bloodstream, and in high-radiation environments such as outer space.
The ORNL developers are John Hunn, formerly a postdoctoral scientist in ORNL's Solid State Division and now a postdoctoral scientist in the Metals and Ceramics Division; and Steve Withrow and C. W. (Woody) White, both of the Solid State Division.
"The micromotor has one moving part--a gear-shaped rotor, which turns on a fixed axle," Hunn says. "In the current state-of-the-art micromotor, the rotor hub and axle erode quickly because of mechanical abrasion. We propose replacing these critical components with diamond."
Diamond is preferable to other materials because of its wear resistance and low friction. It is also mechanically stronger and more resistant to attack by corrosive chemicals.
To initiate manufacture of the diamond device, Hunn and his colleagues used ion implantation on a diamond sample at the Solid State Division. Using the 1.7-million-volt tandem accelerator at ORNL's Surface Modification and Characterization Research Center, Hunn bombarded a diamond sample with carbon ions at 4 million electron volts. The ions penetrated to 1 micron below the surface before inflicting damage, creating a graphite layer inside the crystal.
The ion-implanted sample was then sent to Kobe Electronic Materials Center in Research Triangle Park. The center used microwave chemical vapor deposition to lay down a 30-micron diamond film. As a result, 31 microns of diamond lay on top of the sacrificial graphite layer.
To cut out a diamond rotor from the film sample, Hunn plans to use an ultraviolet laser at Potomac Photonics in Maryland or in ORNL's Chemical and Analytical Sciences Division.
The laser beam will cut a trench through the film down to the graphite layer, outlining the shape of the rotor, which will be 31 microns thick and 100 to 400 microns in diameter, or about the size of the period at the end of this sentence. The laser is stationary, but a computer-controlled table moves the sample under the vertical laser beam to cut the trench in the desired shape.
"You step the laser around the pattern some 20 to 30 times," Hunn says. "As the table moves, each pulse from the focused laser microbeam removes a spot about 10 microns in diameter and 1 micron deep. The spots overlay, forming a continuous trench."
The final step is to heat the sample in a Solid State Division furnace under flowing oxygen. The oxygen burns away the sacrificial graphite layer, etching under the laser-patterned diamond film. Graphite burns at a lower temperature than diamond because the bonds between the carbon atoms are weaker. When all the graphite has been removed, the diamond rotor is freed from the surrounding crystal and can be lifted out.
"I use the static electricity on a bit of plastic to manipulate the tiny diamond shape," Hunn says. "The 0.25-millimeter- thick, 3-millimeter-square diamond substrate can be reused to reduce production cost."
The nickel micromotors, whose rotors will be replaced with diamond, have been manufactured on 1 square centimeter of "real estate" purchased on a 6-inch silicon wafer. This multi-user MEMS process (MUMPs), made available through the Microelectronics Center of North Carolina, allows researchers to obtain individualized MEMS processing at less than 1% of the cost for an entire wafer.
This research was supported by DOE's Division of Materials Sciences, Basic Energy Sciences, and in part by an appointment to the Oak Ridge National Laboratory Postdoctoral Research Program administered by the Oak Ridge Institute for Science and Education.
"The hybrid diamond-nickel micromotor is an interesting example of this fabrication technique," Hunn says. "However, in order to find a market for diamond MEMS, one must invent a usable product that cannot be made from cheaper and easier-to-process materials. In the future, we hope to apply our method to produce unique diamond-based microsensors that would have real technological applications."
ORNL Inverter: Help for Electric Vehicles?
Electric cars and buses, adjustable-speed motors, heat pumps, fans, and compressors may benefit from a new electric power inverter developed by ORNL researchers. Inverters, used with many electric devices and motors, convert available power to the type needed--such as direct current to alternating current.
The Resonant Snubber Inverter (RSI), invented by engineers in the Digital and Power Electronics Group in ORNL's Engineering Technology Division, improves the efficiency and reliability of electric devices. In addition, it is smaller and lighter than other inverters, it greatly reduces electromagnetic interference (EMI), and it potentially lowers the cost of electric power inverters.
The RSI is about 80% efficient at low speeds and 98% efficient at high speeds. Conventional inverters lose more energy; they are about 60 to 70% efficient at low speeds and 94% efficient at high speeds. Inverter efficiency gains of that magnitude--especially at the lower speeds typical of its use in a car--could help electric vehicles become a viable option, researchers say.
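A rough sense of what those efficiency figures mean in practice can be sketched by comparing inverter losses over a trip. The efficiencies are from the article; the trip energy and the split between low- and high-speed operation are assumed illustrations.

```python
# Rough comparison of inverter losses implied by the efficiencies above.
# The drive-cycle split and trip energy are assumptions, not article data.

def losses(energy_kwh, low_speed_frac, eff_low, eff_high):
    """Energy lost in the inverter for a given efficiency profile."""
    e_low = energy_kwh * low_speed_frac
    e_high = energy_kwh * (1 - low_speed_frac)
    return e_low * (1 - eff_low) + e_high * (1 - eff_high)

energy = 10.0            # kWh delivered per trip (assumed)
low_frac = 0.6           # fraction of energy used at low speed (assumed)

conventional = losses(energy, low_frac, eff_low=0.65, eff_high=0.94)
rsi          = losses(energy, low_frac, eff_low=0.80, eff_high=0.98)

print(f"conventional inverter loss: {conventional:.2f} kWh")
print(f"RSI loss:                   {rsi:.2f} kWh")
```

Under these assumptions the RSI cuts inverter losses nearly in half, with most of the gain coming from the low-speed portion of the trip.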
To do their job, inverters employ a series of switches and electronic components. A conventional electric power inverter consists of six semiconductor power switches that turn on or off about 20,000 times per second in different combinations to provide the desired output. The inverter switch turns on and off at full voltage and current, generating a huge, wasteful power spike. This type of "hard switching" is an effective way of obtaining a specific current; however, this circuit design causes many problems.
An ORNL-developed electric power inverter offers more efficiency and reliability and less electromagnetic interference than conventional power inverters. Bob Young (left) and Jason Lai are two of the inventors of the Resonant Snubber Inverter (RSI), which is smaller and lighter than conventional converters, like the 70-kw unit in the foreground. Young and Lai are holding a 100-kw RSI.
The conventional inverter is noisy, big, heavy, unreliable, and expensive, says ORNL researcher Jason Lai, who works with ORNL co-inventors Bob Young, Matt Scudiere, John McKeever, George Ott, and Cliff White and University of Tennessee co-inventors Daoshen Chen and Fang Z. Peng. All are members of the Digital and Power Electronics Group, which is led by Don Adams.
In addition to EMI caused by hard switching, Lai says, conventional inverters put considerable stress on silicon devices and other parts within the inverter, causing reliability problems.
Although the conventional inverter uses six switches to achieve a desired output, the RSI adds three small auxiliary switches that temporarily--and very briefly--divert current, then route it back to one of the six main switches. This diversion, which lasts only a couple of microseconds, produces a zero voltage across the switch, helping reduce damaging power spikes. The RSI's "soft switching" increases efficiency by 4 to 15 percentage points over that of a conventional inverter. The efficiency gain depends on the speed of the motor connected to the inverter. The greatest gains occur when the motor runs at less than full speed, typical of an inverter's function in an electric vehicle.
An even more important advantage of the RSI is that it virtually eliminates EMI. Tests using an oscilloscope show EMI is greatly reduced compared with conventional hard-switching inverters and previously developed soft-switching inverters. EMI can interfere with the operations of appliances, telephones, electronic instruments, television reception, and other electronic equipment, such as computer-controlled ignition in automobiles.
Another benefit of the RSI is that it reduces voltage and current stress to inverter components. This feature improves the reliability and allows the use of lower-cost power devices. Because the RSI smoothly, or softly, changes the voltage and current during device switching, it can also help reduce the possibility of motor failure caused by insulation breakdown and bearing overheating. Soft switching also reduces the inverter's operating temperature, lessening the need for large, heavy heat sinks--devices to dissipate heat. Instead, the RSI can use smaller, lighter, and less expensive heat sinks to absorb excess heat before it degrades electronic equipment and causes failures.
The latest 100-kilowatt, three-phase RSI built by ORNL researchers is compact, measuring 9 by 12 by 6 inches and weighing 20 pounds. Hard-switching inverters from several years ago were bulky and weighed several hundred pounds. Even newer state-of-the-art inverters weigh two to three times as much as the RSI.
In addition to its use in electric vehicles, another likely application for the RSI is in heat pumps. According to McKeever, pairing the RSI with fans that run continuously could increase both comfort and efficiency levels.
The research was supported by DOE's Laboratory Directed Research and Development fund.
In a different project led by Adams, an RSI is being incorporated into an advanced air conditioner to be installed on electric buses, including one in Chattanooga in 1997. The unit is the product of ORNL's work in advanced electric motor technology and work in developing a new air conditioner technology by a cooperative research and development agreement partner. Installation of the unit is expected to eliminate the need for an auxiliary power unit required for the bus's air conditioner. These auxiliary units are currently powered by propane, which results in emissions, noise, added weight, and increased cost. The RSI should make people more willing to leave the driving to the operator of the electric bus.
Already the costs of developing the mercury analysis technique have been recovered through costs avoided. In this case, ORNL's mercury analyzer was used to produce detailed maps of mercury-contaminated soils along the floodplain of Lower East Fork Poplar Creek in Oak Ridge. Now, the ORNL technique will be used to verify that contaminated soil has been removed and replaced with clean soil.
The federal government has decided to excavate soil from areas in the floodplain having mercury concentrations above 400 parts per million--the selected remedial goal option. According to this plan, about 27,000 cubic yards of soil will be removed from two sites along the creek and taken to a permitted landfill at the Oak Ridge Y-12 Plant for disposal.
Mercury concentrations in floodplain soil must be measured to ensure that all soil with mercury concentrations above 400 parts per million has been identified and to verify that remediation has been effective in meeting the remedial goal.
To refine the extent of mercury contamination on the floodplain, workers collected samples and measured the concentration of mercury in each sample. By determining where surface soil contains 400 or more parts per million of mercury, they then estimated the amount and location of soil that must be excavated. Preparations for disposing of the contaminated soil in the landfill were then completed.
"We first prepared and processed up to 150 samples in about three days," says Ralph Turner, developer of the mercury analysis technique and a researcher in ORNL's Environmental Sciences Division. "In this case, sample preparation involved drying, crushing, and digesting the soil samples, but samples of naturally moist soil can be digested without first drying and crushing.
"Technicians dried, pulverized, and digested the soil samples. It then takes about 3 minutes to do an analysis on a soil sample and 2 minutes to analyze a water sample using our technique. Along with workers from Jacobs Engineering, we processed nearly 1100 soil samples from the East Fork Poplar Creek floodplain and refined the area that requires excavation."
Labor and material for one nationally recognized laboratory analysis technique cost about $90 a sample, but for ORNL's mercury analysis technique, they cost only $35 a sample. Based on these costs, Turner says, if the mercury analyzer were used to direct and confirm cleanup, the government would save several million dollars in floodplain remediation costs. The savings come from reduction in the cost of analysis and in the time that soil excavators and handlers are idle.
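The per-sample cost figures quoted above scale up straightforwardly across the roughly 1100 floodplain samples the article mentions; the sketch below simply multiplies them out.

```python
# Cost comparison based on the per-sample figures quoted above and the
# sample count from the floodplain survey described earlier.

samples = 1100
conventional_cost = 90 * samples   # $90/sample, conventional lab method
ornl_cost = 35 * samples           # $35/sample, ORNL headspace method

print(f"conventional: ${conventional_cost:,}")
print(f"ORNL method:  ${ornl_cost:,}")
print(f"analysis savings: ${conventional_cost - ornl_cost:,}")
```

The analysis savings alone run to tens of thousands of dollars; the multimillion-dollar figure Turner cites also counts reduced idle time for excavation crews.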
Ralph Turner draws an air sample from the headspace of a sample bottle containing mercury-contaminated soil.
Turner uses a commercially available, battery-powered mercury analyzer to measure the mercury vapor in the headspace (air at top) of the plastic bottle. Results of this measurement are used to calculate the concentration of mercury in the soil sample.
DOE and the Environmental Protection Agency's (EPA's) Region IV have approved use of this technique for analyzing mercury levels in the East Fork Poplar Creek floodplain before remediation begins. "Our technique," Turner says, "is now being used to confirm that cleanup of the East Fork Poplar Creek floodplain has achieved allowable levels. EPA has approved use of the technique for this activity provided that a few samples also be analyzed by a conventional method."
In the ORNL technique, the mercury-containing soil samples are chemically treated before analysis to transform the mercury into an easily detectable form. To liberate the mercury from the soil particles, the soil sample is digested using aqua regia--a mixture of hydrochloric and nitric acids often used for dissolving platinum and gold.
For both water and soil samples, stannous chloride is added. The tin in stannous chloride, which has been used for many years in conventional mercury analyses, supplies the electrons to reduce oxidized mercury to the metallic element, which tends to escape from water to the air as a vapor.
The 1-liter plastic bottle containing the sample is then shaken by hand, causing about one-third of the elemental mercury in the soil solution to leave it as a vapor and to mix with air in the headspace--the space between the top of the solution and the bottle cap. "This partitioning of the volatile elemental mercury between a liquid and a gas according to Henry's Law is important," Turner says, "because the analyzer detects and measures only mercury vapor in air or some other gas, not mercury in water."
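The headspace measurement can be turned into a soil concentration with a simple back-calculation: if roughly one-third of the elemental mercury partitions into the headspace, the total mercury in the digested sample follows from the measured vapor. The one-third partitioning is from the article; the example measurement values are assumed illustrations.

```python
# Minimal sketch of the headspace back-calculation implied above.
# The ~1/3 partition fraction is from the article; the example
# measurement values are assumptions.

def soil_mercury_ug(headspace_ng_per_L, headspace_L, partition_fraction=1/3):
    """Total mercury (micrograms) in the digested sample, inferred
    from the mercury vapor measured in the bottle's headspace."""
    hg_in_headspace_ng = headspace_ng_per_L * headspace_L
    return hg_in_headspace_ng / partition_fraction / 1000.0

# e.g. 600 ng/L of vapor measured in 0.5 L of headspace
print(f"{soil_mercury_ug(600, 0.5):.2f} micrograms of mercury")
```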
The amount of mercury vapor in the bottle's headspace is measured using a commercially available portable analyzer. The analyzer takes advantage of mercury's affinity for gold; the electrical conductivity of a gold foil in the analyzer is affected by the amount of mercury attracted to it, so the measured change in conductivity indicates the mercury concentration.
The battery-powered mercury analyzer, which can be rented, is about the size of a big loaf of bread. Other materials used with it can easily be packed in a shoebox, except for the bottle in which the headspace measurements are made.
The development of the mercury analysis technique was supported by the Environmental Restoration Division of DOE's Oak Ridge Operations.
Measuring Species of Water-soluble Mercury Gas in Air
Mercury is a heavy liquid metal, but it can float through the air as a gas. Researchers at ORNL have identified an important species of gaseous mercury in air that is highly soluble in water. This finding may help explain the concentration of mercury in precipitation and, as a result, in fish in lakes far from industrial discharges of mercury.
The discovery of this water-soluble form of mercury in air was made by Steve Lindberg, a geochemist in ORNL's Environmental Sciences Division, and Wilmer J. Stratton, professor of chemistry at Earlham College in Richmond, Indiana, who conducted research at ORNL. Professor Stratton was visiting ORNL as the faculty director of the Oak Ridge Science Semester for students from the Great Lakes College Association.
"During dry weather," Lindberg notes, "this form of mercury would also be rapidly deposited to vegetation where it may be washed into soils and nearby streams."
They developed a novel technique using a type of cloud chamber, called a "high-flow refluxing mist chamber," to identify and measure reactive gaseous mercury, called Hg(II), in air. This type of mercury differs from elemental mercury, or Hg(0), which also exists as a vapor in air but is only sparingly soluble in water, in that the Hg(II) atom is missing two electrons. The actual compound in which Hg(II) resides is unknown, but Lindberg says it is most likely mercuric chloride.
Using a high-flow refluxing mist chamber such as this one at the Laboratory, ORNL researchers were the first to identify and measure reactive gaseous mercury in air.
Lindberg and Stratton's measurements indicate that 2 to 4% of total gaseous mercury in air is the highly water-soluble species and about 97% is elemental mercury vapor. "Because this low fraction is highly soluble in water," Lindberg says, "it is important to explaining the observed concentration of mercury in rain and snow, as well as the high rates of mercury dry deposition measured in some areas. Rain and dry deposition are important mechanisms for depositing atmospheric mercury on the earth's surface, helping to account for the high levels of mercury in the tissue of fish in lakes remote from man-made mercury sources."
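Why a small soluble fraction can dominate wet deposition is worth a moment's arithmetic. In the sketch below, the roughly 3%/97% split between Hg(II) and Hg(0) is from the measurements above, but the relative scavenging efficiencies of rain for the two species are assumed illustration values, not article data.

```python
# Illustration of why a small, highly soluble fraction can dominate wet
# deposition. The species split is from the article; the relative
# washout (scavenging) efficiencies are assumptions.

frac_hg2, frac_hg0 = 0.03, 0.97
scavenge_hg2, scavenge_hg0 = 1.0, 0.001   # assumed relative washout

dep_hg2 = frac_hg2 * scavenge_hg2
dep_hg0 = frac_hg0 * scavenge_hg0
share = dep_hg2 / (dep_hg2 + dep_hg0)
print(f"Hg(II) share of wet deposition: {share:.0%}")
```

Under these assumptions, the rare soluble species accounts for nearly all of the mercury scavenged by rain, consistent with Lindberg's point.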
The results should be of current interest because copies of a draft of an Environmental Protection Agency (EPA) report are now in the hands of members of the U.S. Congress. The EPA's Mercury Study Report, which is required by the Clean Air Act Amendments, noted the lack of data on airborne water-soluble mercury. EPA models indicate that a slight difference in the amount of mercury dissolved in airborne vapor could have a large effect on the amount of mercury deposited on the earth's surface.
Sources of Hg(II) emitted directly to the air are the burning of municipal and medical waste in incinerators and coal combustion. Although some coal-fired power plants have scrubbers to remove pollutants from flue gases, Lindberg says their removal efficiency for mercury varies from about 30 to 70%.
Lindberg first learned about the mist chamber in 1985, when he met one of its developers, Bob Talbot of the University of New Hampshire, during a global climate change field study in a Brazilian rain forest.
Then, in 1990, while participating on a panel to review Sweden's mercury program, Lindberg first heard speculation about whether water-soluble mercury vapor may exist in the atmosphere because it had been identified in laboratory studies. He saw an opportunity to determine whether this species exists in outside air by a novel application of the mist chamber.
In 1993 Lindberg contacted Talbot and persuaded him to send ORNL a mist chamber. Lindberg and Stratton then found they could trap water-soluble Hg(II) in an aerosol mist in the chamber. With support from the Electric Power Research Institute, they conducted a number of tests at Walker Branch Watershed near ORNL and near the Earlham College campus to verify that this species did not come from other sources such as oxidation of Hg(0) by ozone in the chamber.
"Swedish scientists had identified a water-soluble species of mercury in laboratory flue gases from coal combustion," Lindberg says. "But the director of the Swedish mercury program didn't think this species existed in outside air. It occurred to me that, if water-soluble mercury is in the air, it will dominate atmospheric deposition of mercury."
The finding is significant, Lindberg says, because the accuracy of predictions of computer models on atmospheric mercury transport and deposition depends largely on assumptions about the fraction of highly water-soluble mercury present.
"About 30 to 80% of mercury emitted to the air by combustion processes is in water-soluble form based on recent studies by Frontier Geosciences in Seattle, which is now collaborating with ORNL to test the mist chamber method," Lindberg says. "This Hg(II) is either deposited quickly or rapidly reduced to elemental mercury by sulfur dioxide dissolved in water. Elemental mercury is also dissolved in water, but Hg(II) is much more soluble in water and deposits much more rapidly."
Lindberg says the data suggest a link between rain and the atmospheric deposition of mercury in lakes far from industrial sources of mercury. The mercury is then transformed into methylmercury by bacteria. This compound, which is toxic to humans if consumed in even tiny amounts, is readily taken up by fish in these remote lakes.
Some geologists, however, argue that rock weathering, rather than atmospheric deposition, could be the chief source of mercury to these lakes. It is still not resolved whether mercury in waterways comes mainly from natural sources or from human activities such as waste incineration and coal combustion for electrical power production.
In the ORNL technique, a vacuum pump draws air through the mist chamber from an inlet at the bottom. A mist is sprayed into the chamber. As the air passes through to the top, the highly soluble mercury in the air dissolves in the mist. In the laboratory, the mercury is then reduced to elemental mercury with tin chloride (which adds the two missing electrons) and stripped from the water droplets by purging with nitrogen onto a gold trap (mercury is attracted to gold). After the gold surface is heated to release the mercury, the concentration of mercury is measured by atomic fluorescence spectroscopy.
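The final step of the procedure above reduces to a simple calculation: the mercury mass recovered from the gold trap divided by the volume of air drawn through the chamber. The flow rate, sampling time, and trapped mass below are assumed example values, not measurements from the article.

```python
# Sketch of the concentration calculation for the mist-chamber method
# described above. All numeric inputs are assumed example values.

def air_hg_ng_per_m3(trap_ng, flow_L_per_min, minutes):
    """Airborne Hg(II) concentration from trapped mass and sampled air volume."""
    volume_m3 = flow_L_per_min * minutes / 1000.0   # liters -> cubic meters
    return trap_ng / volume_m3

# e.g. 0.15 ng recovered after sampling at 20 L/min for 60 minutes
print(f"{air_hg_ng_per_m3(0.15, 20, 60):.3f} ng/m^3")
```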
The ORNL discovery of highly water-soluble mercury in the background atmosphere was made in 1993, reported in 1994 at a scientific meeting, and published in 1995.
Lindberg, who is co-chairman of the conference entitled "Mercury as a Global Pollutant" planned for 1996 in Hamburg, Germany, and the developer of a U.S. network for monitoring the movement of toxic substances in the air, says: "Mercury is a very mobile metal because it exists so often in gaseous forms. It behaves less like a trace metal and more like some pesticides, PCBs, and other persistent organic pollutants. Because its various forms are volatile, it plays hopscotch--depositing on land and water from air, staying there awhile, and then reentering the air as a gas. In this way, mercury can be rapidly and widely distributed throughout the global atmosphere."
Green Plants Emit Mercury, ORNL Discovers
In the 1970s, researchers at ORNL discovered that green plants can take up mercury from the soil and from the air. Now, ORNL researchers have scientific proof that plants can also emit mercury to the air.
Paul Hanson of ORNL's Environmental Sciences Division (ESD) discovered that green plants can give off mercury during his study of mercury uptake in plants from air and soil. The study was conducted for the Electric Power Research Institute, the research arm of the electric utility industry. The goal of the study designed by Steve Lindberg of ESD and Hanson was to determine if the landscape is primarily a source of or sink for mercury--that is, whether it mainly emits mercury to the air or stores mercury deposited on it from air.
The project used two independent methods to measure the mercury fluxes, one in the laboratory and one in the field. To conduct the study, Lindberg and Hanson used a technique pioneered by Lindberg and Ki Kim, a postdoctoral scientist in his laboratory. This high-precision sampling technique measures the exchange of mercury between air and land.
"In the field, we measure mercury concentrations at various heights above the ground to get a concentration profile," Lindberg says. "If the mercury concentrations are higher close to the surface than farther above it, then the surface is a mercury source. If the reverse is true, more mercury is being deposited than emitted. These fluxes are mapped over various areas, and the data are used to answer such questions as whether the surface is primarily a source of or sink for mercury."
In his laboratory experiments, Hanson studied maple, oak, and spruce saplings in a chamber into which mercury-free air was introduced. Hanson developed the mercury chamber method using equipment developed for ground-level ozone studies in the 1980s at ESD. The soil the saplings were planted in was isolated from the chamber. Hanson sampled the air for mercury vapor. "To my surprise," he says, "mercury was coming from the plants!"
Further experiments showed that the plants take mercury from the air when the air's mercury level is above about 20 nanograms per cubic meter. When the mercury level in air is only 2 nanograms per cubic meter, the plants emit mercury. These levels of mercury are common near pollution sources and at background sites, respectively.
"Our theory," Hanson says, "is that elemental mercury in soil gas is pulled into the plant when the plant's mercury level is low. The plant tries to achieve equilibrium with respect to mercury levels in the air. When the plant's mercury level rises and the air mercury level decreases, at some point the plant releases some of its mercury by transpiration--the process of giving off vapor containing waste products through the stomata of plant tissues."
While Hanson was performing his studies, Lindberg, Kim, and Jim Owens of ESD climbed the 44-meter meteorological tower at Walker Branch Watershed near ORNL to measure gradients in mercury concentrations over the forest.
"To my surprise," Lindberg says, "the tower data also indicated significant emission of mercury from the oak, hickory, and maple trees below."
Lindberg also studied trees at a Christmas tree farm in Wartburg, Tennessee. These trees are too far away from Oak Ridge to be exposed to any mercury emissions there. He measured mercury concentrations in air, soils, water, and vegetation at the Wartburg farm.
"We found that mercury deposits from the air to the trees when they are wet," he says. "But we also observed that the trees are a strong source of mercury to the air when they are dry, supporting the data from the ORNL studies."
In ORNL studies in the 1970s of mercury-rich soils near a large mercury mine in Spain, Lindberg, Danny Jackson, and John Huckabee found that these soils emit mercury vapors at a rate that depends on temperature and vegetation cover. "We found that crops grown on these soils accumulate mercury in two ways," Lindberg says. "The roots take up mercury from the soil, and the leaves absorb mercury vapor from the air. These pathways may provide important exposure mechanisms if humans consume either leafy or root-type vegetables grown on these soils."
ORNL's pioneering method to measure fluxes of mercury over the landscape has also resulted in another discovery. Anthony Carpi, a graduate student from Cornell University working in Lindberg's laboratory, found large gaseous fluxes of elemental mercury from sewage sludge applied to forest and farm soils.
"Sewage sludge used to fertilize soil is a previously unmeasured source of elemental mercury to the air," Lindberg says. "Carpi found that more mercury comes off agricultural soil than forest soil to which the same amount of sewage sludge is applied. The emission rates measured over sludge-amended soils exceeded those measured over forest soils at Walker Branch Watershed by a factor of 100 or more. He also discovered and measured emissions of methylmercury from sewage sludge."
This is significant because sewage sludge applied to soil is the only known terrestrial source of methylmercury to the atmosphere. Methylmercury is a highly toxic compound formed in the environment that can be a health hazard for humans when taken in as food.
In the ORNL studies, mercury vapor is collected in quartz glass tubes that contain acid-washed sand coated with metallic gold. When air is drawn through tubes at a prescribed flow rate, mercury vapor adheres to the gold. By heating the gold, researchers can measure the amount of mercury vapor released at a precision level of a few trillionths of a gram.
Process Removes Mercury from Mixed Waste
Some mixed waste on DOE's Oak Ridge Reservation contains mercury as well as other hazardous metals, toxic chemicals, and low-level radioactive substances. Can this mercury be removed to simplify and lower the cost of treating and disposing of the remaining waste?
Researchers at ORNL have shown that a commercially developed mercury-removal process they modified can extract mercury from mixed waste. In laboratory studies, the process has been shown to remove 99.6% of the mercury present in actual mixed waste and from synthetic soils, surrogate sediments, and crushed glass from fluorescent light bulbs.
In laboratory studies, researchers with the General Electric (GE) Company have shown that GE's patented potassium iodide-iodine (KI/I2) leaching process can remove mercury from mercury-contaminated waste at elevated temperatures. Two ORNL researchers have shown that the process can work effectively on mixed waste at room temperature, and they have made other changes so that the leaching solution can be prepared more rapidly for recycling.
The researchers are Dianne Gates of the Environmental Engineering Group in ORNL's Environmental Sciences Division, and Thomas Klasson of the Remediation Technology Group in the Chemical Technology Division. In their tests, they use familiar objects such as glass flasks and steel wool.
"The mercury removal step is especially needed as a pretreatment for mixed wastes in which mercury is the chief nonradioactive material," Klasson says. "For example, mixed waste can contain mercury and low-level radioactive metals such as uranium, technetium, cesium, and strontium.
"It is desirable to remove mercury because it is volatile--it turns from a liquid metal into a gas," he continues. "If it's removed, the remaining radioactive waste can be treated with thermal processes that melt the components into a ceramic or turn them into glass.
"The design of the thermal treatment device would be simpler and less costly if volatile mercury is removed first. We would then have fewer gases to deal with during the heating process."
At nuclear facilities, there is also interest in removing mercury from burned-out fluorescent lights that are being stored as "administratively radioactive" waste because of the presence of radiation in the lights' original location. If mercury can be removed from this waste, which is stored as crushed glass, then the glass can be reused rather than stored as a waste. For example, it could be a starting material for vitrification, a method for electrically heating radioactive waste to form a glass that traps the radioactive material.
"For actual mixed waste, we will not know what form the mercury is in," Gates says. "That's why we like this process. It attacks and isolates mercury in the elemental form and in any compound whether it be mercuric chloride or mercuric sulfide or mercuric oxide."
The iodine atoms in potassium iodide surround mercury atoms in any chemical form. Because of the strong attraction between both types of atoms, charged molecules called mercury iodide complexes form, trapping the mercury atoms in the leaching solution. Iodine in the leaching solution is used to oxidize elemental mercury (so it won't escape into the air as a vapor) and to attack mercuric sulfide, freeing the mercury ions from the sulfur ions.
In the ORNL experiments, 10 to 100 grams of mixed waste are poured into a 200-milliliter flask. The flask is placed in an environmental shaking chamber, which rotates and shakes for four hours to mix the solid waste with the leaching solution. Such mechanical mixing is required to maximize the contact between liquid and solid to separate out as much mercury as possible.
The next series of steps aims at removing the mercury from the leaching solution and replacing the solution's lost iodine, thus making it reusable for treating mercury-contaminated mixed waste. The solution must be recharged with iodine because some iodine is used up in oxidation. The mercury-bearing leaching solution is run through a column containing steel wool. The mercury forms an amalgam with iron in the steel wool.
Dianne Gates holds a flask of surrogate sediment to which mercury has been added. Thomas Klasson examines a flask containing a mixture of the mercury-contaminated sediment and the potassium iodide-iodine leaching solution. This flask will be placed in the environmental shaking chamber, which mechanically mixes the liquid and solid to separate out as much mercury as possible.
"GE mixes the leaching solution with iron filings at elevated temperatures to remove mercury from the leaching solution," Gates says, "but we found that steel wool works better. We can separate mercury from the leaching solution using steel wool at room temperature in just one hour. Because the steel wool can be packed in a column, we have eliminated one separation step from the GE process. Then we use lime to remove any metals remaining in the solution and to convert all remaining iodine to iodide, in a procedure that takes 30 minutes. Finally, in approximately 1 hour we regenerate the required amount of iodine in the leaching solution by adding an acid and hydrogen peroxide. Altogether, the mixing, leaching, mercury separation, and regeneration processes take less than 8 hours." All in a day's work.
The laboratory-scale research was supported by DOE's Environmental Management Program, Office of Technology Development, Mixed Waste Integrated Programs. The ORNL researchers are seeking funding to test the pretreatment technology at an engineering scale inside a building.
ORNL Method Removes Mercury from Soil
The Oak Ridge Reservation has soils that are contaminated with mercury because of human activities. That's the bad news. The good news is that the reservation also is blessed with microbial organisms that could be the key to releasing its soil mercury. Here's the story.
About five years ago, Richard Tyndall, then of ORNL's Health Sciences Research Division, discovered a consortium of bacteria that resides within amoebae in soils on the Oak Ridge Reservation. These one-celled organisms apparently serve as a protective niche for the bacteria. Arpad Vass of the same division isolated bacteria from the amoebae and found that they produce a powerful biodispersant that breaks up oil. It was thought that these isolates could be used to break up slicks from the 1989 Exxon Valdez oil spill in Prince William Sound, which damaged Alaskan birds and fish. However, the idea was never tried.
"As we continued to experiment with this isolate," Vass says, "we found that it can break clumps of soil into fine, dustlike particles, just as a detergent separates grease particles. After more experimentation, we found that it could emulsify mercury--something that was unheard of."
Vass and Tyndall discovered that the bacterial isolate could produce a suspension of tiny globules of mercury in a liquid consisting of water and soil fines. However, as in an emulsion of oil in vinegar, the globules of mercury will not mix with the soil fines in the liquid. Says Vass, "We believe that the biodispersant overcomes the attractive forces between the soil particles and the mercury, thus allowing the mercury to separate from the soil."
Tyndall and Vass then contacted their supervisor Clay Easterly to let him know what they were trying to do. "I told him I was interested in finding a practical method to remove mercury from soil, because I had heard about the mercury contamination of soil around the Oak Ridge Y-12 Plant," Vass says. "My idea was that, since the biodispersant not only breaks up soil fines but also emulsifies mercury into thousands of small beads, we could use electroplating to remove mercury from the soil. This scheme didn't work, so we called Clay and he suggested that we use copper."
Easterly recalled his days in high school chemistry class, when students used mercury to shine copper pennies, giving them the luster of dimes. "People used to coat nearly worthless copper coins with mercury to make them look like valuable silver coins," Easterly told Vass.
"So we tried many different copper sources, but most had improper surface conditions to amalgamate efficiently with the mercury," Vass says. "We went back to Clay and he said, 'Try pennies.' His idea worked. The mercury was attracted to the copper. Then I suggested using a magnet to remove the two metals. Clay came up with the idea of penny surrogates--BBs with iron cores and a copper coating."
The new ORNL biodispersant-based amalgam process uses an intra-amoebic bacterial biodispersant to break up the soil in a rotating cylinder and copper-coated iron pellets to attract the mercury. The mercury-covered pellets are extracted with a magnet. They are then placed in a vacuum oven where the heat separates the mercury from the copper.
"Our process can remove elemental mercury more efficiently than any other process we know about, and it will save time and money," says Easterly. "It will also eliminate the costly prospect of permanently storing thousands of drums containing mercury-contaminated soil in controlled hazardous waste sites. We are moving away from a system that stores mercury-contaminated soil to one that cleans it up. This new method allows us to remove a heavy metal that can pose a health risk in certain chemical forms and to return the soil to the land to be used again."
"The key to this process is the biodispersant, which is naturally occurring, nontoxic, and biodegradable," Vass says.
The problem with the process, Easterly says, is that the soft copper on the BBs rubs off. He hopes that ORNL's Metals and Ceramics Division can develop a hard magnetic copper alloy and that the material can be formed with dimples like a golf ball to present more surface area for attracting mercury.
The ORNL biodispersant-enhanced amalgam process can be used to remove elemental mercury from soil, such as that at the Y-12 Plant and other industrial sites. Because the process is an amalgamation, it would not remove mercury compounds like mercuric sulfide. Mercuric sulfide formed in the soil of the floodplain of East Fork Poplar Creek in Oak Ridge after the creek received releases of mercury from the Oak Ridge Y-12 Plant in the late 1950s and early 1960s. Mercuric sulfide, which is very insoluble, is not readily taken up by the body; elemental mercury is more hazardous to human health.
The biodispersant obtained from the bacterial isolate will be particularly helpful in extracting mercury from contaminated areas on the Oak Ridge Reservation that have what Easterly calls "very tight soils." Tight soils are more difficult to break into the smaller pieces needed for good contact between the copper and the mercury.
No dangerous chemicals are used in the removal process. Both the cleansed soil and the copper pellets can be used again. The reclaimed mercury can be sold to industries that need the element for their manufacturing. Other advantages are the mobility of the equipment, short setup time, and minimal environmental hazard during the operation.
"Sale of the mercury could offset the cost of processing, particularly when you consider the costs of the alternative of storage or disposal of the contaminated soils as hazardous waste," Easterly says. The cost of the low-technology equipment is anticipated to be less than storage of mercury-contaminated soil or any other methods involving chemistry or high energy use (such as incinerating the soil).
The new mercury-removal process helps to fulfill an ORNL mission of finding more efficient, less costly ways to clean the environment to comply with the law and protect human health. Fortunately, some home-grown technologies may help us solve some problems at home.