Crisis Management and Collaborative Computing
By Kimberly Barnes, John Cobb, and Nenad Ivezic
In this simulation of collaborative computing for disaster management involving a chemical plume, John Cobb checks the information on the computer screen while Kimberly Barnes confers by phone with Ed Oliver (ORNL's associate director for Computing, Robotics, and Education), seen at the upper right of the big screen. Photograph by Tom Cerniglio.
ORNL researchers have been combining software engineering, computational science, and collaborative technologies to address problems in many domains, such as enterprise management, disaster management, education, banking and finance, and engineering design and manufacturing. Collaborative technology environments will provide rapid, on-demand interoperability of autonomous legacy applications and tools within new and evolving environments.
A scenario of the near future: In the middle of the night, an earthquake of magnitude 7.1 on the Richter scale violently shakes the ground and buildings near a large metropolitan area. A few people are killed, some property is damaged, water and gas pipes are broken, and power and phone lines are down. However, public safety is threatened most by events following the quake. Because of leaking gas from severed lines, fire breaks out in parts of the city center, and the conflagration grows because of the lack of water for firefighting. At the same time, a toxic plume is spreading from a badly damaged chemical plant. At least 350,000 people living within 5 miles of the plant worry that their homes and even their lives may be endangered by the plume.
The situation is critical. Crisis management efforts in the first few hours after the quake will determine the total extent of losses. The toll may be as high as 5,000 dead, 20,000 injured, 100,000 homeless, and $35 billion worth of damage. However, optimal crisis management strategies may be able to reduce it to as low as 15 dead, 250 injured, and $500 million worth of damage. Critical decisions must be made fast. Where will the dangerous plumes drift? Where is the fire danger greatest? Which neighborhoods should be evacuated? How will emergency crews be alerted? Where will fire and police personnel be deployed? How will risks to safety be communicated to the public at large?
Nenad Ivezic, developer of the Rapid Integration for Disaster Management collaborative information system, is shown in his office.
Fortunately for this stricken area, the Rapid Integration for Disaster Management (RAPID-DM) collaborative information system developed by Nenad Ivezic of ORNL's Computer Science and Mathematics Division (CSMD) is deployed as part of the city's emergency management preparedness. Within minutes, RAPID-DM identifies and activates a dozen logistics, transportation, plume transport prediction, and evacuation systems and resources and allows them to work together, or interoperate, to help scores of end users responsible for making critical decisions.
Remarkably, these complex systems have never previously been assembled in exactly the same configuration. Quick interoperation mechanisms for weather feeds, geographic information system (GIS) databases, transportation planning tools, and plume movement prediction systems allow decision makers to accurately predict the direction in which a toxic plume will move and to evacuate the population from areas expected to be affected. Quick coordination of rescue forces from the city's police and fire departments and from the National Guard is possible only because of the RAPID-DM capability, which enables communication among these teams and geographically dispersed decision makers.
Collaborative Decision-Making Environments
Researchers at ORNL are accepting the challenge of addressing problems of national importance such as disaster management. The ultimate goal, much like the scenario described, is to allow geographically dispersed decision makers to respond quickly in a collaborative decision-making environment. Disaster management is only one example of the applicability of current and proposed research that combines software engineering, computational science, and collaborative technologies.
To enable a collaborative decision-making environment, many complex issues must be investigated and plausible solutions identified. In a collaborative decision-making environment, unlike traditional approaches, information is extremely varied and comes in a multitude of formats. Software programs historically written to solve a single problem, without any requirement to interoperate with other programs, must be adapted to work in concert with many others. Diverse end users with distinct educational backgrounds and professional jargon must be brought into an environment of common understanding. The information needed goes beyond data on the status of the infrastructure; it includes not only inventory, finance, and accounting data but also detailed, predictive, interactive what-if calculations that involve large-scale scientific computation, visualization, and steering to answer near-term questions of dramatic importance. For example: Will the flash flood affect this neighborhood? Will the plume cloud move over this neighborhood, that neighborhood, or away from all population centers? Where will the hurricane make landfall?
Collaborative computing is the paradigm of choice for solving large, complex scientific and managerial problems. The use of collaborative computing to establish rapid, on-demand interoperability of autonomous legacy applications and tools within new and evolving environments requires researchers to go beyond current software integration approaches. New initiatives, such as DOE 2000 (see previous article in this issue), establish an aggressive research agenda to "fundamentally change the way scientists work together and how they address the major challenges of scientific computation." To accomplish this change, DOE 2000 plans to develop and explore new computational tools and libraries that advance the concept of national collaboratories and Advanced Computational Testing and Simulations (ACTS). The vision of DOE 2000 is to accelerate the ability of DOE to accomplish its mission through advanced computing and collaboration technologies. DOE 2000 ushers in a new era of scientific collaboration that transcends geographic, discipline, and organizational boundaries.
Collaborative Computing Advances at ORNL
In collaborative computing, two or more computer users work in concert across time and space by using interoperable software so they can simultaneously solve a problem. ORNL computer scientists have broken new ground in collaborative computing, spanning the scientific and administrative computing spectrum. Each step contributes to the foundation upon which real collaborative environments are built.
In scientific computing, DOE 2000 efforts at ORNL include the Materials Micro-Characterization Collaboratory (MCC), CUMULVS, and the electronic notebook projects. In the MCC project, the goal is to join several centers of excellence into a single on-line interactive collaboratory in which electron microscopes can be operated remotely. CUMULVS is designed to support remote computational steering of parallel applications and includes features such as interactive visualization and fault tolerance. In distributed, collaborative environments, projects like ORNLs electronic notebook provide a mechanism for scientists to record information such as the usage of instruments and experimental results.
Other ORNL information and computer science efforts also contribute to enabling collaborative computing. In CSMD, researchers are developing the Collaborative Management Environment (CME) and other technologies such as NetSolve and data mining techniques. Researchers in the Center for Computational Sciences (CCS) are also contributing to advancements in data mining.
- The CME, led by Kimberly Barnes and T. E. Potok (both of CSMD), is a joint research project among Oak Ridge, Ames, Lawrence Berkeley, Los Alamos, and Fermi National Laboratories to establish a robust, scalable, and secure virtual management system for the DOE complex. The focus is on comprehensive integration of information within DOE and tools to enable management of inter-laboratory collaboration. The objectives are to create an enterprise model that unifies disparate legacy databases, adapts to changing environments, and rapidly synthesizes and presents information on thousands of research projects where the information is distributed in a multi-platform environment.
- NetSolve, according to Jack Dongarra, ORNL-University of Tennessee Distinguished Scientist, is a network-enabled solver that allows users access to computational resources, both hardware and software, distributed across a network. Its development was motivated by a need for easy-to-use, efficient mechanisms for remotely accessing computational resources. Ease of use is obtained via four different interfaces: Fortran, C, MATLAB, and Java.
- Data mining, or knowledge discovery in databases, is a field in which a collection of sophisticated techniques is used to extract needed information from data. The techniques represented within this discipline allow new patterns and rules to be discovered that may not have been previously considered. Nancy Grady is leading the CSMD data mining effort.
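The network-enabled solver idea behind NetSolve can be illustrated with a toy sketch. This is not NetSolve's actual API; the registry, function names, and the in-process "dispatch" are all invented for illustration of the concept of handing a named problem off to a registered computational resource.

```python
# Toy sketch of a network-enabled solver (NOT NetSolve's real interface):
# a client names a problem, an agent looks up a registered resource that can
# solve it, and the result comes back. Here the "network" is simulated by a
# local dictionary of solver functions.

registry = {
    # A registered computational resource: naive matrix multiplication.
    "matmul": lambda a, b: [[sum(x * y for x, y in zip(row, col))
                             for col in zip(*b)]
                            for row in a],
}

def netsolve(problem, *args):
    """Dispatch a named problem to whichever resource is registered for it."""
    solver = registry.get(problem)
    if solver is None:
        raise KeyError(f"no resource registered for {problem!r}")
    return solver(*args)

result = netsolve("matmul", [[1, 2]], [[3], [4]])
```

The point of the pattern is that the client never needs to know where or how the solver runs; it only names the problem and supplies the data.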
CSMD researchers are combining existing techniques and new approaches to mine transportation, financial, scientific, and medical data as part of a project funded internally by ORNL's Laboratory Directed Research and Development Program. Working with banking industry data, researchers developed a prototype system to predict personal bankruptcy. This system is expected to be used by one of the largest banks in the nation. The approach combines decision-tree capabilities with neural networks, handling categorical account records off line (for example, monthly) and time-series transaction data in real time.
CCS is collaborating with the Connecticut Healthcare Research and Education Foundation (CHREF) to perform data mining on Medicare-Medicaid patient encounter data collected by the Health Care Financing Administration (HCFA), a branch of the U.S. Department of Health and Human Services. Hundreds of gigabytes of anonymous inpatient, outpatient, home health care, hospice, and physician-supplier data have been stored in the High-Performance Storage System (HPSS), developed by IBM and DOE national laboratories including ORNL. These data will be analyzed to develop quality care indices based on HCFA data, which could be used to influence public health and health-care delivery policies.
Crisis Management Advances at ORNL
In the quest to address problems of national concern such as disaster management, researchers in ORNL's Computational Physics and Engineering Division (CPED) have laid new groundwork through development of large, single-purpose software applications such as the Joint Flow and Analysis System for Transportation (JFAST), the Police Command and Control System (PCCS), and Hazardous Assessment for Consequence Analysis (HASCAL). In addition, researchers in the Energy Division developed the Oak Ridge Emergency Management System (OREMS), and researchers in the Environmental Sciences Division developed the Ecological Model for Burning in the Yellowstone Region (EMBYR).
- JFAST, initiated by Brian Jones of CPED, is a multimodal transportation analysis model designed for the U.S. Transportation Command (USTRANSCOM) and the Joint Planning Community. JFAST is used to help the armed forces determine transportation requirements, perform course-of-action analysis, and project delivery profiles of troops and equipment by air, land, and sea. JFAST was used to guide deployment of troops and equipment in Bosnia and Haiti, and its predecessor, ADANS, was used for the Persian Gulf War.
- PCCS, led by Bob Hunter of CPED, is an information management system that was used to assist the Atlanta Police Department in providing security for the Summer Olympic Games in 1996. Vice President Al Gore described this system as "pioneering technology" in his report to the president on the preparations for the Atlanta Olympics. Before the games, the system assisted the Atlanta Police Department in planning for the Olympics. During the games, it gave the department a critical response advantage as it exercised responsibility for security of the area. Currently, PCCS helps the department manage its resources on a daily basis.
- HASCAL, led by Brian Worley of CPED, estimates the health hazards from atmospheric releases of nuclear, biological, and chemical materials. For example, it is used to predict the transport of a hazardous cloud, the effects of hazardous material at geographic locations, and the geographic distribution or cross section of the atmosphere. HASCAL is currently used by the Department of Defense.
- OREMS, initiated by Ajay Rathi and John Sorensen, both of the Energy Division, is a simulation and analysis model for evacuation strategies based on a detailed network traffic-flow simulation model. OREMS is currently used by the Department of Defense and the Federal Emergency Management Agency (FEMA).
- EMBYR is a computer fire-simulation tool used to investigate the causes and consequences of large-scale fires such as those in Yellowstone National Park during 1988. This tool will be used to investigate what-if scenarios related to possible landscape-scale effects of variations in fire frequency, fire management, and global climate regimes over time scales ranging from complete fire seasons to millennia. William Hargrove of CPED participated in the development of EMBYR.
A New Center at ORNL
To build a collaborative environment by combining advances in computing and information science with many software applications, fundamental research in semantic modeling, object technologies, software engineering, and data fusion and analysis is necessary. In addition, the processes for building a collaborative environment and for establishing a collaborative session must be researched and standardized.
The Collaborative Technologies Research Center (CTRC) was recently established in CSMD to perform this fundamental research with the goal of enabling adaptive tools, languages, and problem solvers to be brought together across time and space for a vast array of domains: enterprise management, disaster management, education, banking and finance, and engineering design and manufacturing. CTRC partners with groups from government, industry, and universities to accomplish its core research.
As a part of the core research, semantic modeling provides the glue necessary for establishing and maintaining collaborative environments and warrants the most explanation. Semantic modeling attempts to provide a more capable structure to software objects than is provided by traditional software engineering approaches. These models provide for definitions of interobject connectivity, behaviors, and information in a way that supports expressive and powerful manipulation, organization, querying, and searching for information. An important trait of semantic modeling is that it is designed to support the building of (1) a shared understanding of problem domains by humans, and (2) unified models of application domains that are transferable across multiple software components. This trait is very important when multiple software components need to interoperate, yet have not been designed for interoperation or cannot be designed by a tightly interacting design team.
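A minimal sketch of what a semantic model's machinery might look like, assuming a toy graph of typed objects connected by named relations. The class, relation, and object names below are illustrative only, not the CTRC design; the sketch shows only how typed inter-object connectivity supports querying and searching.

```python
# Toy semantic model: entities carry a 'kind' (their type in the domain),
# links between entities carry a relation name, and a query walks the graph.
# All names here are invented for illustration.

class Entity:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind
        self.links = []            # list of (relation, target Entity) pairs

    def link(self, relation, target):
        """Record a typed connection from this entity to another."""
        self.links.append((relation, target))

def query(entities, kind=None, relation=None):
    """Return names of entities matching a kind and/or participating in a relation."""
    hits = []
    for entity in entities:
        if kind is not None and entity.kind != kind:
            continue
        if relation is not None and not any(r == relation for r, _ in entity.links):
            continue
        hits.append(entity.name)
    return hits

plume = Entity("HASCAL", "plume-model")
gis = Entity("RegionalGIS", "map-source")
plume.link("consumes", gis)        # the plume model consumes the map source
```

Because both the types and the relations are explicit data rather than buried in code, two independently built components can be checked for compatibility by examining their models.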
CTRC researchers are leading the RAPID-DM effort described in the introductory scenario and are partnering with ORNL's disaster management software developers and Carnegie Mellon University. (To find out more about CTRC activities, please visit our World Wide Web home page at http://www.epm.ornl.gov/ctrc.)
Building a Collaborative Environment
Fig. 1. The envisioned ORNL collaborative disaster management environment will make interoperable a number of ORNL-developed software systems to provide transportation planning and scheduling (JFAST), evacuation analysis (OREMS), toxic plume movement prediction (HASCAL), and police command and control (PCCS).

Fig. 2. ORNL's approach to crisis management is based partly on semantic meta-modeling. Development of a semantic meta-modeling language provides a common basis for software and domain languages and an approach for building shared meaning and evolving languages.
How is a collaborative environment really built, and how does it work? Consider what must be done to enable rapid communications and coordinated responses in the disaster management scenario outlined at the beginning of this article. Figure 1 illustrates a collaborative disaster management environment that enables such rapid communication and coordinated response. However, this environment assumes a significant collection of interoperating legacy systems. To understand the complexity of the building task, keep in mind that the software developers never designed their software to interoperate, nor did they interact with one another in the first place. Hence, legacy software makers and collaborative tool developers create and publish models of their software so that others may understand its intended behavior and function. The emergence of standards in the software engineering community is enabling development of common, problem-domain-specific languages. These languages are derivable from common metalanguages (e.g., the Unified Modeling Language), making development of shared, agreed-upon languages a distinct possibility. Figure 2 illustrates this situation.
The domain experts negotiate models of the domain with the software developers to ensure proper interaction between the software models and collaborative tools and models of the domain. This creation, negotiation, and publishing of models will be done using tools provided by CTRC, which include:
- a software modeling and wrapping tool to enable independent modeling and code-wrapping of component software by software developers;
- a domain modeling tool to enable development and management of models that capture the properties of the problem domain;
- a modeling support workbench that helps an integration engineer support application and domain modeling through intelligent user interfaces (so that, for example, the integration engineer can use the support workbench to discover, retrieve, and compare software patterns from a repository of software models);
- a communication language that provides a medium for interaction with and querying of software models; and
- a systems configuration interface builder to enable building application-specific interfaces for end users to configure new collaborative environments.
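The first of the tools above, code-wrapping, can be sketched as follows. The wrapper interface, the published model format, and the stand-in legacy plume model are all assumptions for illustration, not CTRC's actual design; the sketch shows only the idea that a legacy application is exposed through a wrapper whose published model tells other software what the component expects and produces.

```python
# Hypothetical code-wrapping sketch: a legacy tool, never designed to
# interoperate, is exposed through a wrapper carrying a published model of its
# inputs and outputs. All names and the model format are invented.

class LegacyPlumeModel:
    """Stands in for an existing single-purpose application."""
    def run(self, wind_speed, wind_dir):
        # Toy physics: plume heads downwind, spread scales with wind speed.
        return {"plume_heading": wind_dir, "spread_km": wind_speed * 0.5}

class Wrapper:
    def __init__(self, tool, model):
        self.tool = tool
        self.model = model         # published description of inputs/outputs

    def invoke(self, **inputs):
        """Validate inputs against the published model, then call the tool."""
        missing = [p for p in self.model["inputs"] if p not in inputs]
        if missing:
            raise ValueError(f"missing inputs: {missing}")
        return self.tool.run(**inputs)

model = {"inputs": ["wind_speed", "wind_dir"],
         "outputs": ["plume_heading", "spread_km"]}
wrapped = Wrapper(LegacyPlumeModel(), model)
result = wrapped.invoke(wind_speed=20, wind_dir=90)
```

Because the model travels with the wrapper, an integration engineer can decide whether two components fit together without reading either one's source code.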
Before users can employ the tools described to rapidly establish a collaborative session, whether during planning or during an actual disaster, the environment must be built. As a feasibility study for this research, ORNL scientists are building a test environment. The disaster management tools (e.g., HASCAL, OREMS) are modeled by ORNL software developers, and the disaster management model is defined by an integration engineer working with disaster management experts. The models are published in a repository from which they can be retrieved as needed for a collaborative session.
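The publish-and-retrieve cycle described above can be sketched as a toy repository keyed by problem domain. The repository interface and the model contents are assumptions made for illustration.

```python
# Toy model repository: developers publish software models under a domain,
# and a later collaborative session retrieves them by name. The interface
# shown is invented for illustration.

repository = {}

def publish(domain, name, model):
    """File a model in the repository under its problem domain."""
    repository.setdefault(domain, {})[name] = model

def retrieve(domain, name):
    """Fetch a previously published model, or None if it is unknown."""
    return repository.get(domain, {}).get(name)

# Developers publish models of their tools ahead of time...
publish("disaster-management", "HASCAL", {"kind": "plume-prediction"})
publish("disaster-management", "OREMS", {"kind": "evacuation-analysis"})

# ...and a session retrieves whichever models it needs on demand.
evac_model = retrieve("disaster-management", "OREMS")
```

The essential property is that publishing and retrieval are decoupled in time: the models exist in the repository long before anyone knows which disaster, and which combination of tools, will require them.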
Using a Collaborative Environment
Having built a collaborative environment so that many software applications and communication mechanisms can now interoperate, how is the environment used? Consider this disaster management scenario. A biologically contaminated plume is discharged into the atmosphere in the Atlanta region. Upon learning about the crisis, a representative from FEMA searches for services to provide information about road accessibility during a biological atmospheric hazard in the Atlanta region. HASCAL, one of the computational systems that deals with this type of problem, predicts the directions in which the contaminant plume will move. The FEMA representative examines the performance and requirements of the candidate systems (e.g., overall response time, required data feeds, and administrator requirements) before selecting a system. In this instance he selects HASCAL, calls up a virtual conferencing link with the HASCAL administrator, and communicates the requirements for the service.
The HASCAL administrator establishes the data feeds necessary for the application. She issues a request to the disaster management information broker to identify alternative data sources in the Atlanta region that are capable of supplying field measurement data for the specified biological disaster, weather feeds, and a GIS map of the region. The administrator then selects and interconnects the optimal services from the broker-provided list. This process allows the HASCAL application to operate with the latest local data to make predictions about the plume's toxicity and movement given the current situation.
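The broker step might look roughly like the sketch below, in which registered data sources are filtered by region and data type and ranked by an assumed latency field. The source list, field names, and ranking criterion are all invented for illustration.

```python
# Toy information broker: registered data sources are matched against a
# request (region + data type) and returned best-first. Every entry and
# field below is invented for the example.

sources = [
    {"name": "AtlantaWeatherFeed", "region": "Atlanta", "type": "weather", "latency_s": 5},
    {"name": "StateWeatherFeed",   "region": "Atlanta", "type": "weather", "latency_s": 30},
    {"name": "AtlantaGIS",         "region": "Atlanta", "type": "gis-map", "latency_s": 2},
]

def broker_query(region, data_type):
    """Return candidate sources for the request, lowest latency first."""
    matches = [s for s in sources
               if s["region"] == region and s["type"] == data_type]
    return sorted(matches, key=lambda s: s["latency_s"])

candidates = broker_query("Atlanta", "weather")
best = candidates[0]
```

The administrator's role in the scenario corresponds to choosing among `candidates`; the broker's job is only to narrow and order the field.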
In the meantime, the FEMA representative establishes a connection, via an intelligent multicast tool from the repository of collaborative tools, with the regional disaster relief headquarters, the police department, the fire department, and the medical emergency squad. In addition to the visual and textual link with the team members, the representative chooses to communicate that certain roads are closed because they are in the area of the expected plume flow. This communication is done by broadcasting the information about the inaccessible roads and using a shared geographical map of the region. Next, the FEMA representative searches for evacuation planning services and identifies alternative systems to handle evacuation planning for the affected area. The representative chooses the OREMS system and brings it into the collaborative environment in a manner similar to the HASCAL system, with its output overlaid onto the shared geographic map.
This scenario demonstrates how two applications, HASCAL and OREMS, and collaborative tools such as the intelligent multicast are rapidly combined in an environment to allow remotely located decision makers from various agencies and departments to define an evacuation plan using the latest available information. The ultimate goal is to provide a mechanism for many applications, tools, and experts to work in concert to solve complex problems. For example, one can easily imagine adding to and/or deleting from the collaborative session other software applications and invoking additional collaborative tools as the situation evolves.
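The add-and-drop composition of such a session can be sketched as follows. The session object, the applications, and the layer representation are illustrative only; the sketch captures just the idea that applications join or leave the session over time, each contributing a layer to the shared map.

```python
# Toy collaborative session: applications are added or dropped as the
# situation evolves, and each active application contributes one layer
# to the shared geographic map. All names are invented.

class Session:
    def __init__(self):
        self.layers = {}                  # application name -> its map layer

    def add_app(self, name, layer):
        """Bring an application into the session with its map overlay."""
        self.layers[name] = layer

    def drop_app(self, name):
        """Remove an application (and its overlay) from the session."""
        self.layers.pop(name, None)

    def shared_map(self):
        """The shared map is the set of active layers, in a stable order."""
        return sorted(self.layers.values())

session = Session()
session.add_app("HASCAL", "plume-extent-layer")
session.add_app("OREMS", "evacuation-route-layer")
```

Dropping an application automatically retires its contribution to the shared view, which is what lets the session evolve without being rebuilt.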
Although much research is still required, the advances being made by ORNL researchers push collaborative computing to the leading edge. In DOE 2000, important discoveries are being made with regard to the remote control of instruments, steering of computational simulations, and electronic storage (in a shared or private manner) of important findings from experimental results. All of the technologies resulting from DOE 2000 efforts will enable scientists from around the world to work in collaboration as if they were in the same laboratory. Other efforts such as CME, NetSolve, and data mining also contribute to collaborative environments. CME enables collaborative management of thousands of research projects; managers and researchers can use a combination of technologies developed from this and DOE 2000 efforts to collaborate in much the same way that scientists use the technologies in the laboratory. NetSolve provides the capability to find and manage hardware and software resources needed for a collaborative session. Data mining and storage capabilities aid in validation, knowledge discovery, and efficient storage of the very large amounts of data found in computationally intensive problems. Advances in semantic modeling, object technologies, and software engineering provide the glue that enables the software, hardware, and users to work together in a common environment. The ORNL disaster management technologies provide a grassroots opportunity to test the use of collaborative technologies in solving a real-world problem. As more insight is gained, the hope is that such technologies can be used by researchers from around the world to jointly study and solve many complex problems that are otherwise unsolvable.
KIMBERLY D. BARNES is director of the Collaborative Technologies Research Center in ORNL's Computer Science and Mathematics Division. She has an M.S. degree in accounting from the University of Tennessee at Knoxville and a B.S. degree in business administration from Tennessee Technological University. Her research interests are accounting information systems, data mining, information technologies, and electronic commerce. The center's research projects include development of the Financial Automated On-line User System, Collaborative Management Environment, Financial Services Technology Consortium (FSTC) Electronic Check, FSTC Bank Internet Payments System, Mining Large Multimedia Data Sets, and Gas and Oil National Information Infrastructure.
JOHN W. COBB is chief information officer and technical assistant for the Spallation Neutron Source project managed at ORNL. Previously, he worked in the Computing, Information, and Networking Division as the ORNL scientific computing coordinator and user advocate. He has a Ph.D. degree in physics from the University of Texas at Austin. His dissertation concerned computational and theoretical research on advanced fusion power concepts. He came to ORNL to work on a Fusion Energy Division project to build predictive computational simulation models for semiconductor processing tools. His research interests include plasma physics, plasma astrophysics, high-performance computing, and fusion energy. He is a member of the American Physical Society, American Vacuum Society, Materials Research Society, and the Association of Computing Machinery.
NENAD IVEZIC is a research staff member at the Collaborative Technologies Research Center of ORNL's Computer Science and Mathematics Division. He received his Ph.D. degree in 1995 in computer-aided engineering from Carnegie Mellon University. His interests are in the areas of software engineering, machine learning, engineering design, and collaborative technologies. He has been involved in the research and development of machine learning applications for a number of engineering decision support systems. He is investigating technologies that enable collaborative work, including semantic modeling, ontology engineering, and shared work environments.