Progress, and Applications
of the Human Genome Project
Sponsored by the U.S. Department of Energy Human Genome Program
Human Genome News Archive Edition
In the News
DOE, NIH Discuss Informatics Goals
(See more detailed, personal notes on the meeting by DOE staff member Daniel Drell.)
Since the beginning of the Human Genome Project, informatics has been widely regarded as one of the project's most important elements. The vast quantity and wide variety of generated information dictate the use of computational tools for data collection, management, storage, organization, access, and analyses.
On April 2-3, the DOE and NIH human genome programs convened a workshop in Herndon, Virginia, to identify informatics needs and goals for the next 5 years. Attending were 46 invited informatics and genomics experts and 17 agency staff from DOE, the NIH National Human Genome Research Institute (NHGRI), the NIH National Institute of General Medical Sciences, and the National Science Foundation (NSF).
Both DOE and NHGRI support the philosophy that the needs of data users are foremost and must drive the goals of genome informatics. At the meeting, the wide-ranging viewpoints of large sequencing centers, smaller specialized groups, biotechnology industry users, researchers exploring comparative and functional genomics, and medical geneticists were presented (see medicine and genome data).
Not all uses for these data can be anticipated today, so structural flexibility must be built into current and planned databases that support the genome project. Additionally, because knowledge will grow over time, the data must be curated continuously: corrected and enriched with new functional and useful links (annotation).
Meeting attendees identified priorities and made suggestions and policy recommendations on these and other issues.
Priorities and Issues
Standardization. Much of the current data is highly heterogeneous in format, organization, quality, and content. This is not surprising, given the wide diversity of genome-research investigators who are generating the data. An identified priority is to comprehensively capture raw, summary, or processed data in standard, well-structured formats using controlled vocabularies. Additionally, databases must be integrated and linked.
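To make the idea concrete, the following minimal sketch validates a data record against a required-field schema and a controlled vocabulary. The field names and vocabulary terms here are illustrative assumptions, not drawn from any actual genome database.

```python
# Hypothetical sketch: checking a genome-data record against a
# required-field schema and a controlled vocabulary. Field names and
# terms are invented for illustration only.

REQUIRED_FIELDS = {"accession", "organism", "feature_type"}

# A controlled vocabulary restricts a field to agreed-upon terms.
FEATURE_TYPES = {"gene", "exon", "intron", "STS", "EST"}

def validate_record(record):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    ftype = record.get("feature_type")
    if ftype is not None and ftype not in FEATURE_TYPES:
        problems.append(f"unknown feature_type: {ftype!r}")
    return problems

good = {"accession": "X001", "organism": "H. sapiens", "feature_type": "gene"}
bad = {"accession": "X002", "feature_type": "Gene"}  # wrong case, no organism
print(validate_record(good))  # []
print(validate_record(bad))
```

Records that pass such a check can be exchanged and merged between databases without per-source cleanup, which is the practical payoff of shared formats and vocabularies.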
Intelligent consensus standards should be defined and implemented by academia, government, and industry working together. Today, industry standards are very distinct from the few that exist in the genome project. The Object Management Group, now composed largely of industry representatives, should also involve personnel from academia and government. Explicit object definitions and access methods are urgently needed. Component-oriented software standards would promote systems integration, interoperability, flexibility, and responsiveness to change (adaptability). A balance is needed, however, between maintaining standards and allowing change and flexibility.
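The "explicit object definitions and access methods" idea can be sketched as a shared interface that any conforming database component implements; client tools then interoperate with any back end. The interface and class names below are hypothetical illustrations, not part of any actual standard.

```python
# Hedged sketch: a shared object definition (interface) with explicit
# access methods. Names are invented; no real genome-database API is
# implied.
from abc import ABC, abstractmethod

class SequenceStore(ABC):
    """Any conforming component exposes these access methods, so client
    tools can swap back ends without change (interoperability)."""

    @abstractmethod
    def get_sequence(self, accession: str) -> str: ...

    @abstractmethod
    def put_sequence(self, accession: str, seq: str) -> None: ...

class InMemoryStore(SequenceStore):
    """Toy implementation used only to demonstrate the contract."""
    def __init__(self):
        self._data = {}
    def get_sequence(self, accession):
        return self._data[accession]
    def put_sequence(self, accession, seq):
        self._data[accession] = seq

store: SequenceStore = InMemoryStore()
store.put_sequence("U00001", "ATGC")
print(store.get_sequence("U00001"))  # ATGC
```

Because clients depend only on the interface, replacing `InMemoryStore` with a networked or relational implementation requires no client changes, which is the component-oriented adaptability the attendees called for.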
Tools. Tools to speed up the data-finishing bottleneck in sequencing are critical; still other tools are needed for production, research, access, annotation, data capture, functional genomics, and data mining. A Web site that collects and annotates these tools would be very useful.
Availability of Underlying Data, Especially for Individual Genotypes. Given the expense of phenotyping, the ability to see ABI traces and check on the possible association with a particular single-nucleotide polymorphism would be valuable. ABI traces are not necessary for the reference sequence because questionable regions can be resequenced.
Annotation. Automated annotation analyses should follow clearly defined standard operating procedures, be applied consistently, and be documented well enough to support a detailed understanding of particular chromosome regions. Automated annotation generates intelligent hypotheses about sequence function; its results must be regarded critically, even as overall annotation quality improves with time. For this reason, human participation in the annotation process remains vital for getting the most out of genomic information.
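One way to make automated annotation auditable is to record the standard operating procedure (tool, version, parameters) alongside every result, so a human curator can later judge each hypothesis critically. The GC-content rule below is purely illustrative, not a real annotation method.

```python
# Hedged sketch: an automated annotation step that tags each result
# with the SOP that produced it. The GC-rich rule is an invented
# example of a simple, documented, consistently applied procedure.

SOP = {"tool": "gc_scan", "version": "0.1", "window": 10, "threshold": 0.6}

def annotate_gc_rich(seq):
    """Return hypothesized GC-rich windows, each tagged with the SOP used."""
    hits = []
    w, t = SOP["window"], SOP["threshold"]
    for i in range(len(seq) - w + 1):
        window = seq[i:i + w]
        gc = sum(base in "GC" for base in window) / w
        if gc >= t:
            hits.append({"start": i, "end": i + w, "gc": gc, "sop": SOP})
    return hits

hits = annotate_gc_rich("ATATGCGCGCGCGCATAT")
print(len(hits), "candidate regions found")
```

Because every hit carries its SOP, a curator reviewing the database years later can tell exactly which procedure and parameters produced the annotation and re-evaluate it as methods improve.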
Quality Checks. Attendees suggested regular checks of database quality. Users are frustrated by incorrect data and the unwillingness or inability of database providers to correct these mistakes. Official editors who curate information could resolve errors and improve data quality. Successful quality assessment at sequencing centers serves as a model.
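A periodic quality check might scan the database for mechanical problems a curator should resolve. The toy check below, assuming a simple table of (accession, sequence) rows, flags duplicate accessions and illegal characters; real checks at sequencing centers are far more extensive.

```python
# Hedged sketch of a periodic database quality check over a toy table
# of (accession, sequence) rows. The checks shown are minimal examples.

VALID_BASES = set("ACGTN")

def quality_report(rows):
    """Flag duplicate accessions and sequences with illegal characters."""
    seen, errors = set(), []
    for accession, seq in rows:
        if accession in seen:
            errors.append((accession, "duplicate accession"))
        seen.add(accession)
        bad = set(seq.upper()) - VALID_BASES
        if bad:
            errors.append((accession, f"illegal characters: {sorted(bad)}"))
    return errors

rows = [("A1", "ACGT"), ("A2", "ACXT"), ("A1", "GGGG")]
print(quality_report(rows))
```

Running such a report on a schedule, and routing its findings to official editors, would turn user frustration over stale errors into a tracked curation queue.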
Training and Environment Issues. NSF science and technology centers are models for needed genome informatics centers. Three to five such centers were proposed to facilitate interactions among various disciplines and the training of students.
Last modified: Wednesday, October 29, 2003
Document Use and Credits
Publications and webpages on this site were created by the U.S. Department of Energy Genome Program's Biological and Environmental Research Information System (BERIS). Permission to use these documents is not needed, but please credit the U.S. Department of Energy Genome Programs and provide the website http://genomics.energy.gov. All other materials were provided by third parties and not created by the U.S. Department of Energy. You must contact the person listed in the citation before using those documents.
Base URL: www.ornl.gov/hgmis
Site sponsored by the U.S. Department of Energy Office of Science, Office of Biological and Environmental Research, Human Genome Program