Journal of Cosmology, 2010, Vol 12, 3825-3833.
JournalofCosmology.com, October-November, 2010

Evolution of Electronic Partners:
Human-Automation Operations and ePartners During Planetary Missions

Mark A. Neerincx, Ph.D.1, and Tim Grant, Ph.D.2
1TNO / Delft University of Technology, P.O. Box 23, 3769 ZG Soesterberg, The Netherlands
2Netherlands Defence Academy, P.O. Box 90.002, 4800 PA Breda, The Netherlands


Abstract

This paper presents the five-stage evolution of ePartners to support joint human-automation operations during planetary missions. In the final (fifth) stage of evolution, maturity is established: ePartners and astronauts support, learn from, and teach each other in collaboration. Following a situated Cognitive Engineering methodology, ePartner functions are developed incrementally and systematically tested against claims about their social, cognitive, and affective outcomes. The empirical and analytical tests employ (1) analogue environments in laboratory and field settings (e.g. MARS500 and Concordia, respectively), and (2) simulation tools of work organization and practices in mixed-reality settings. The mature partnership will substantially improve the performance and resilience of human-automation teams that have to act under higher levels of autonomy.


Key Words: ePartner, electronic partners, evolution, situated Cognitive Engineering, decision support, learning and teaching.



1. Introduction

A human mission to Mars, like all manned deep-space missions, would require high levels of autonomy and resilience to establish safe and effective human-machine operations under dynamic and often highly demanding conditions (e.g. HUMEX, 2003; Kanas and Manzey, 2008). By designing for partnership between distributed human and automation actors, an adaptive heterogeneous system can be established that copes with these conditions. The collection of connected electronic partners (ePartners) will act in a ubiquitous computing environment, cooperating with the astronauts to train, prepare, or plan for future actions, to assess current situations, to determine a suitable course of action to solve a problem, and to complement astronauts’ task performance. Each personal ePartner predicts its crew member’s momentary support needs by on-line gathering and modelling of human, machine, task, and context information. The partnership is based on a set of policies that define the different work organizations, such as the authority to re-allocate a rover in an emergency and the obligation to ask for (tele-)support when local skills to deal with the emergency are lacking. The communication with the ePartner is "natural or intuitive", based on a common reference or ontology for consistent and coherent interpretations of the human and automation communication acts.

To develop the envisioned joint cognitive system, we follow an evolutionary approach that integrates new technology (e.g. affective and pervasive computing) and human factors theories (i.e., accepted features of human social, cognitive, and affective processes). The ePartner’s social, cognitive, and affective functions are incrementally designed in an iterative process that systematically takes account of the adaptive nature of both astronauts’ and ePartners’ behavior and their reciprocal dependencies. These functions are inspired by human cognition, but need not be identical to it, as they build on specific computing capacities. ePartners can act in both the real and the virtual world. The added value of acting in a virtual world would be to try out possible responses to situations, to demonstrate the consequences of available options, and to give recommendations to the (human) user, e.g., for proficiency training or for what-if simulations as decision support.

The purpose of this paper is to outline the evolution of ePartners to support joint human-automation operations on planetary surfaces. In this outline, they mature in five development stages: their functionalities originate and take shape incrementally, and grow via learning and teaching. On-going empirical tests result in progressive refinement ("mutation") and validation ("survival"). Section 2 describes each of the stages and the current state of the art in implementing them. Section 3 focuses on the final stage, the mature ePartner. Section 4 outlines the design process that should establish such maturity. Section 5 draws conclusions and discusses the way ahead.

2. Electronic Partner Evolution

2.1 Evolution scheme. With the benefit of over 60 years of digital computing technology, we can identify a number of stages in the evolution of the relationship between a computer or ePartner unit and its human user. Based on observing systems to date and the first laboratory manifestations of new ePartner functionalities, we can predict how this evolution is likely to progress. Figure 1 depicts our views on the evolution scheme as five stages. Today’s best-of-breed operational systems are at stage (2), while more conservative systems are still at stage (1).

During stage (1), the role of digital computers is seen as being limited to delivering information to users. The computer gathers data, processes it, and informs the user of the results. Computing is termed "number-crunching" or "data processing". The emphasis in developing stage (1) systems is two-fold: usability and ubiquity. Usability has been improved by drawing on research into individual psychology (primarily human perception and motor capabilities), ergonomics, and human factors. Ubiquity has been made possible by engineering advances, such as miniaturization, standardization, and ruggedness (e.g., Weiser, 1993). The archetype of stage (1) is the Supervisory Control And Data Acquisition (SCADA) category of software applications.

Figure 1. Five stages in the evolution of ePartner functionality.

In stage (2), the role of computers takes a step forward from merely informing the users. Computers now support their users’ cognitive processes, such as searching for information, diagnosing abnormal situations, and planning future activities. They may be termed "(decision) support systems" or "digital assistants". The delivery of support is still one way: from the computer to users. The development emphasis is on the user experience. Cognitive support may be extended to supporting the users’ affective processes. In its simplest form, this may take the form of enabling users to rank, score, tag, or comment on informational content, whether this be a downloaded video clip or piece of music or the service provided by an Internet shopping site. More advanced systems may try to capture the user’s emotional state, e.g. using facial or voice recognition (e.g., Picard, 1997).

The key step made in stage (3) is that support now becomes two-way. Arguably, affective computing in stage (2) could be regarded as two-way support, because in capturing the user’s emotional state the user is in effect supporting the ePartner (perhaps involuntarily). However, stage (3) goes deeper, with the emphasis on mixed-initiative computing. As the term suggests, either the ePartner or the user can take the initiative in solving a problem or reacting to a situation. Typically, this takes the form of the one offering the other some hints or suggesting aspects of the problem to focus on. The converse is delegation. Stage (3) should allow either the user or the ePartner to delegate the solution of parts of a problem to the other, depending on the situation, the respective capabilities of the user and computer, and the information available to each (e.g., Schneider-Hufschmidt et al., 1993).

Stage (4) adds learning capabilities to the ePartner, including the application of learning to making sense of novel situations. Learning will be aided by maintaining a history of the interactions between the ePartner and the user and by enabling the ePartner to assimilate the user’s explanations of his/her actions and decisions. A necessary condition for learning to be regarded as stage (4) is that the ePartner must be able to model its own design space. This implies that the ePartner will have to apply situated Cognitive Engineering techniques in modifying its design as it learns (e.g., Neerincx, 2010).

The final step achieved in reaching stage (5) is that both the ePartner and its user should be able not just to learn, but also to teach the other, requiring both to possess didactic capabilities. One application of stage (5) functionality would be in Just In Time Training (JITT). For example, a human crew could be launched on a planetary mission without all the training needed to operate the habitat on arrival. The ePartner would teach them the necessary skills as the need arises.

2.2 Existing systems. We can apply the proposed scheme of evolution to existing systems for supporting human-automation operations. Space station software is designed to gather housekeeping and payload data, to process it, and to display it to astronaut users. Like current military Command and Control systems, the emphasis is on delivering information to allow the users to gain and maintain their situation awareness. Hence, they are at stage (1).

In present-day manned space operations the functionalities that would support the users’ cognitive processes are largely confined to the ground-based mission control systems. This is a valid design choice so long as the ground-space communications link is reliable and the decision-cycle time from spacecraft to mission control and back is short. Grant et al. (2006) have argued that light travel time will force the designers of planetary missions to migrate mission control system functionalities such as diagnosis and planning to astronauts’ ePartners in order to cope with the dynamic situational conditions. The combination of a higher level of crew autonomy, limited (human) resources, and an extreme environment requires stage (2) support as an absolute minimum.

Figure 2: The MARS500 prototype, providing feedback on cognitive task load, emotion and performance, and enabling the collection of user experiences during collaborative training.

2.3 Research manifestations. Elements of the functionalities beyond stage (2) are already being tested in the laboratory. For example, SCOPE demonstrated simple mixed-initiative diagnostic support for astronauts (Bos et al., 2004). The objective was to design context-specific support that is integrated into astronauts’ momentary task performance and, subsequently, to establish a mixed-initiative human-automation collaboration in which the automation maintains a "world model" to diagnose possible "unhealthy" machine states and starts specific mitigation strategies (Fault Detection, Isolation and Recovery). The specific diagnosis and mitigation actions can be divided among human and automation actors. This is a manifestation of an element of stage (3) functionality. In the Mission Execution Crew Assistant (MECA) project, affective computing functionality is being tested in laboratory or field analogues of Martian conditions, e.g. during the Mars500 experiment in Russia (a 520-day mission of a six-person isolated crew) and the Concordia experiment in Antarctica (12 persons in isolation during the winter). These experiments comprise manifestations of stage (4) and limited stage (5) functionality (Van Diggelen and Neerincx, 2010). First, via "Collaborative Training" with rotating trainer and trainee roles, the ePartner guides the joint training of procedures, assessing the interpersonal communication, and mediating knowledge sharing from the "expert" to the "novice" crew member. Second, via entertainment gaming the ePartner is an "affecter" that improves the crew member’s mood. Third, via a goal-based timeline tool the ePartner motivates the crew members to perform certain activities for establishing personal goals (e.g., learning objectives) and to maintain a personal (episodic) memory (see Fig. 2). During all three use cases, the ePartners monitor crew members’ cognitive task load, collaboration, emotional states, and performance, to learn better how these relate. This monitoring is part of the user experience sampling functionality that helps astronauts to reflect on previous activities and happenings in a constructive way to improve resilience.

3. Description of Mature Electronic Partners

The complete system consists of ePartners and their users. Users may be humans, e.g. astronauts operating on the planetary surface, but may also be intelligent artefacts, e.g. mobile rovers. We assume that each user has his, her, or its own ePartner unit. For clarity, in the remainder of this section we will describe users as if they are humans. The ePartner will be assumed to have stage (5) functionalities.

3.1 Unit functionality. An ePartner unit’s environment contains its user, other objects, and other ePartners and their users. The relationship between a unit and its user is described in the next section; that between a unit and other ePartners is described in Section 3.3. This section considers an ePartner unit in isolation.

A number of information flows within and through an ePartner unit can be identified, as follows:

The unit maintains models of its user, its environment, and itself. These models contain a dynamic component, e.g. describing the current (known) state of the modelled entity, and a static component, e.g. describing the (known) norms and values of the modelled entity.

The unit can gather information on the physical state of its user and other objects. It updates the appropriate model, and informs its user of this information as necessary.

The unit can gather information on the cognitive and affective states of its user. Based on this information, it updates its user model, and provides the user with appropriate support as necessary.

The unit can accept information and support provided by its user, updates its models, and adjusts its processing accordingly.

The unit can learn from the information it gathers and from support provided by its user. Learning may involve modifying the user, environment, and/or self model.

The unit can identify learned information that the user does not have, but is likely to need.

The unit can teach its user learned information that the user does not already have, using the most suitable didactic methods.

The unit can accept teaching from its user, assimilating this into the user-, environment-, and/or self-model.
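The information flows above can be sketched as a minimal data structure: three models (user, environment, self), each with a static and a dynamic component, plus operations for observing, learning, and identifying teachable knowledge. This Python sketch is illustrative only; the class and method names (`EPartnerUnit`, `observe`, `learn`, `teachable`) are hypothetical and not drawn from any MECA implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """One of the unit's three models (user, environment, or self)."""
    static: dict = field(default_factory=dict)   # known norms and values
    dynamic: dict = field(default_factory=dict)  # current (known) state

class EPartnerUnit:
    """Minimal sketch of the information flows within an ePartner unit."""
    def __init__(self):
        self.models = {"user": Model(), "environment": Model(), "self": Model()}

    def observe(self, entity: str, state: dict) -> None:
        """Gather information and update the appropriate dynamic model."""
        self.models[entity].dynamic.update(state)

    def learn(self, entity: str, lesson: dict) -> None:
        """Assimilate gathered information or user explanations into a model."""
        self.models[entity].static.update(lesson)

    def teachable(self, user_knowledge: set) -> set:
        """Identify learned information that the user does not have yet."""
        return set(self.models["environment"].static) - user_knowledge
```

A unit would, for instance, observe a raised heart rate into the dynamic user model, assimilate a taught procedure into the static environment model, and offer that procedure to a user who lacks it.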

3.2 Relationship with user. The relationship between an ePartner unit and its user is – as the name suggests – one of partnership. The ePartner’s purpose is to maximise the effectiveness and efficiency of the unit-user dyad. Note that the ePartner is only capable of informational actions. It must achieve physical actions through its user (or other ePartners' users).

As Figure 1 shows, ePartner units and their users provide each other with information and support. Moreover, each is capable of learning, and each is capable of teaching the other. The unit can accept and execute tasks given to it by its user, provided it has the resources to do so. If not, the unit may reject the task it is given, justifying this to its user. If the user permits, the unit may delegate tasks to its user when the user is more capable of performing these tasks or has more capacity for them. Hence, task acceptance and rejection may involve a negotiation process between the unit and its user. Task execution requires a planning capability.
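The accept-or-reject-with-justification step of this negotiation can be approximated as a simple decision rule. The inputs assumed here (a set of required capabilities and a spare-capacity count) are illustrative assumptions, not the authors' mechanism:

```python
def consider_task(required: set, capabilities: set, capacity: int) -> tuple:
    """Accept a task only when the unit has the needed capabilities and
    spare capacity; otherwise reject it, justifying this to the user."""
    missing = required - capabilities
    if missing:
        return ("reject", f"missing capabilities: {sorted(missing)}")
    if capacity <= 0:
        return ("reject", "no spare task capacity")
    return ("accept", None)
```

A rejection carries a human-readable justification, which is what would feed the subsequent negotiation (e.g. delegating the task back to the user).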

3.3 Networking teams. Our analysis primarily concerns the ePartner unit and its user. However, some indications can be given about how our analysis might be extended to the complete system consisting of multiple ePartners and users.

Users are members of teams. They can join and leave teams, and they can be a member of multiple teams simultaneously. For example, a user can be a member of the geological team for exploration purposes, as well as being a member of the team that cooks this week’s evening meals. Other qualities could also be modelled as team membership. For example, an astronaut and his or her family could be regarded as a "team". Astronauts of a particular nationality or gender or with a particular profession or specialisation could be modelled as members of a "team". Similarly, non-human users of a particular type could be regarded as members of a "team".

The ePartner is not a team member in its own right, but takes on the membership of whichever teams its user is a member of. Part of the support an ePartner unit provides its own user is to facilitate communication between the user and its team members. This communication passes between the respective ePartners, and, beyond sharing information, might encompass providing support to, teaching, and tasking other ePartners and their users. Moreover, an ePartner unit can support its user by networking to find other users, perhaps from outside the user’s current set of teams, to enable the creation of new teams.

3.4 Example scenario. An example scenario illustrates the operational capabilities of ePartners with stage (5) functionality.

For the purposes of the scenario, we assume that there are two teams engaged in geological exploration of the Martian surface (Figure 3).

Team A consists of astronauts Anne and Albert and rovers 1 and 2. Team B consists of astronauts Benny and Brenda, but no rovers. Astronaut Herman is in overall charge from the home base. Each astronaut and rover has an ePartner.

The events in the scenario step by step are as follows:

1. Benny’s spacesuit fails. Pressure and temperature loss will lead to fainting, frostbite, and eventually death.

2. Benny’s ePartner senses the failure of the spacesuit, informs Benny, but, judging that Benny will quickly need help, also informs Brenda (via her ePartner).

3. Through their respective ePartners, Brenda informs Herman of the emergency. Her ePartner detects that she is scared, and offers appropriate support.

4. Aided by his ePartner, Herman quickly develops a plan to recover Benny to the home base and treat him for frostbite.

5. Via his ePartner, Herman instructs Rover 1 to leave Team A, traverse to Team B’s location, join Team B, and recover Benny to the home base.

6. Nobody has the skills to treat frostbite. Nevertheless, Herman instructs Albert through their ePartners to return to the home base to prepare to treat Benny, while broadcasting a request for knowledge about treating frostbite.

7. Rover 2’s ePartner has some relevant knowledge of treating frostbite from a previous mission, and starts teaching Albert (via his ePartner) while he returns to the home base.

This scenario illustrates giving information (step 2), giving cognitive (step 4) and affective (step 3) support, and teaching (step 7). It also illustrates joining and leaving teams (step 5) and networking (step 6).

Figure 3: Snapshots from a storyboard of a MARS scenario in which ePartners help to deal with a failure of Benny’s space suit.

4. Electronic Partner Development

Following a situated Cognitive Engineering (sCE) methodology, the ePartner’s incremental development consists of continuous design, evaluation, and refinement of its social, cognitive, and affective functions (Neerincx, 2010). Extensive analyses of the operational demands, human factors aspects, and (future) technological capacities have resulted in an initial (functional) Requirements Baseline, including a specification of its design rationale. The design rationale contains a definition of claims (i.e. hypotheses on the upsides and downsides of specific ePartner functions), which refer to specific situational demands (use cases) and, where possible, to accepted features of social, cognitive, and affective processes (e.g. on sense making, situation awareness, and cognitive task load; Neerincx et al., 2008). A claim is valid when the upsides significantly outweigh the downsides, providing the justification for the corresponding requirements.
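As a sketch, a design-rationale claim can be represented as a record linking a requirement to a use case and to its hypothesised upsides and downsides, with validity as the balance between them. The numeric scoring rule below is purely illustrative; the sCE methodology itself does not prescribe numeric scores:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A design-rationale claim: hypothesised effects of one ePartner function."""
    requirement: str
    use_case: str
    upside_scores: list = field(default_factory=list)    # measured positive effects
    downside_scores: list = field(default_factory=list)  # measured negative effects

    def valid(self, margin: float = 0.0) -> bool:
        # The claim justifies its requirement when the upsides outweigh
        # the downsides by the chosen margin (illustrative rule only).
        return sum(self.upside_scores) > sum(self.downside_scores) + margin
```

Empirical tests would then refine the scores iteratively, turning claim validation into the "survival" step of the evolutionary process described earlier.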

The evaluation of the ePartner’s functions should encompass the emergent behaviour of both the human and the ePartner (i.e., their reciprocal adaptive behaviour). Fidelity refers to how adequately the evaluation represents the relevant rules, and specifically the dependencies, in a human-agent team. Realism can vary from one extreme, the real environment, to the other, a virtual environment. A combination of different human-in-the-loop evaluations and simulations is needed to establish adequate fidelity and overall realism (Smets et al., 2010).

For simulations with appropriate fidelity, the actors and environment can be modeled in the Brahms work practice modeling tool (Sierhuis et al., 2007), and the organization with the KAoS policy model (Uszok et al., 2008). The authorization policies specify which actions an actor is or is not permitted to perform; the obligation policies specify which actions an actor must perform or waive. Actors can ignore policies in order to cope with unforeseen situations. The simulation platform allows for evaluating the consequences of situated behaviour with "real humans" in the loop who act in mixed reality, i.e., with real and/or virtual team-mates, rovers, and equipment.
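The authorization/obligation distinction can be illustrated with a toy policy table. This is not KAoS syntax; the actors, actions, and default-deny rule below are assumptions made for the sketch:

```python
# Illustrative policy records, loosely modelled on the authorization and
# obligation policies described in the text (not actual KAoS notation).
POLICIES = [
    {"type": "authorization", "actor": "rover", "action": "leave_team",
     "permitted": True},
    {"type": "authorization", "actor": "rover", "action": "enter_no_go_area",
     "permitted": False},
    {"type": "obligation", "actor": "astronaut", "action": "request_telesupport",
     "when": "local_skills_lacking"},
]

def permitted(actor: str, action: str) -> bool:
    """Authorization check over the policy set; unknown actions are denied."""
    for p in POLICIES:
        if (p["type"] == "authorization"
                and p["actor"] == actor and p["action"] == action):
            return p["permitted"]
    return False  # default deny (an assumption of this sketch)

def obligations(actor: str, condition: str) -> list:
    """Actions the actor must perform under the given condition."""
    return [p["action"] for p in POLICIES
            if p["type"] == "obligation"
            and p["actor"] == actor and p["when"] == condition]
```

In the simulation platform, actors could still choose to ignore such a policy verdict to cope with an unforeseen situation; the point of the model is to make the deviation explicit and evaluable.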

5. Conclusions and the Way Ahead

This paper presented how ePartners are evolving to support joint human-automation operations on planetary surfaces. For core ePartner functions, an initial Requirements Baseline has been specified and is being shared with an international R&D community. This specification includes claims on the social, cognitive, and affective effects of ePartners and astronauts who support, learn and teach in collaboration. These claims are being refined and validated to establish a sound situated theory on human-automation partnership in space (Neerincx et al., 2008).

ePartners are non-human; some of their information processes are inspired by human cognition, but other processes utilize specific computer technologies. So, ePartners are a new kind of social actor; they are adaptive and evolve by learning from their environment (including the humans), e.g. by pattern recognition (Goertzel and Combs, 2010). Far from Earth, environmental conditions are different and evolve differently (Istock, 2010; Joseph and Schild, 2010). The human explorers will adapt to these conditions and learn from the "learning ePartners", so that a new (hybrid) social environment emerges with a new kind of intelligence (cf. Goertzel and Combs, 2010). This evolution of ePartners does not involve a Technological Singularity, i.e. the sudden creation of "smarter-than-human intelligence" due to self-reinforcing technological progress (Kurzweil, 2005). Because we are focusing on empowering joint cognitive systems, in which the human is always included, there is no single, isolated, sudden progress of the automation. The human is in control in the refinement and learning processes of the situated Cognitive Engineering methodology. An important challenge for this methodology is to design ethical and legal norms into the set of policies for the emergent behavior of ePartners and their human counterparts (cf. Value Sensitive Design; Friedman et al., 2006). For example, policies should be defined on possible no-go areas when an ePartner acts outside the "rules of engagement" set by its (human) user, or acts on behalf of its (human) user without first obtaining permission or, in emergency situations, justifying its actions afterwards.


Acknowledgements A large number of persons contributed to the research that is presented in this paper: Jurriaan van Diggelen, Jasper Lindenberg, Nanja Smets, Leo Breebaart, André Bos, Antonio Olmedo Soler, Uwe Brauer, Christian Knorr, Maarten Sierhuis, Jeff Bradshaw, and Mikael Wolff. Furthermore, several astronauts, domain and task experts, and MSc students participated in parts of the study. The ePartners are partly developed in the MECA project that is funded by the European Space Agency (contract numbers 19149/05/NL/JA and 21947/08/NL/ST).



REFERENCES

Bos, A., Breebaart, L., Neerincx, M.A., and Wolff, M. (2004). SCOPE: An Intelligent Maintenance System for Supporting Crew Operations. Proceedings of IEEE Autotestcon (pp. 497–503).

Friedman, B., Kahn, P.H., and Borning, A. (2006). Value Sensitive Design and Information Systems. In: D. Galletta and P. Zhang (Eds.). Human-Computer Interaction and Management Information Systems: Applications (Chapter 16, pp. 348 – 372). Armonk NY: M.E. Sharpe Inc.

Goertzel, B. and Combs, A. (2010). Water Worlds, Naive Physics, Intelligent Life, and Alien Minds. Journal of Cosmology, 5, 897-904.

Grant, T., Olmedo Soler, A., Bos, A., Brauer, U., Neerincx, M.A., Wolff, M. (2006). Space Autonomy as Migration of Functionality: The Mars case. SMC-IT 2006: 2nd IEEE International Conference on Space Mission Challenges for Information Technology, pp. 195-201. Los Alamitos, California: IEEE Conference Publishing Services.

HUMEX (2003). HUMEX: A study on the survivability and adaptation of humans to long-duration exploration missions. ESA Special Publication 1264. Noordwijk, The Netherlands: ESA Publications Division.

Istock, C. (2010). Life On Earth And Other Planets. Science and Speculation. Journal of Cosmology, 5, 890-896.

Joseph R., and Schild, R. (2010). Biological Cosmology and the Origins of Life in the Universe. Journal of Cosmology, 5, 1040-1090.

Kanas, N., and Manzey, D. (2008). Space Psychology and Psychiatry, 2nd edition. El Segundo, CA: Microcosm Press.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Neerincx, M.A., Bos, A., Olmedo-Soler, A., Brauer, U., Breebaart, L., Smets, N., Lindenberg, J., Grant, T., and Wolff, M. (2008). The Mission Execution Crew Assistant: Improving Human-Machine Team Resilience for Long Duration Missions. Proceedings of the 59th International Astronautical Congress (IAC2008), 12 pages. Paris, France: IAF. DVD: ISSN 1995-6258.

Neerincx, M.A. (2010). Situated Cognitive Engineering for Crew Support in Space. Personal and Ubiquitous Computing. Published online: 13 July 2010 (www.springerlink.com), 12 pages. DOI 10.1007/s00779-010-0319-3.

Picard, R.W. (1997). Affective computing. The MIT Press: Cambridge, MA.

Schneider-Hufschmidt, M., Malinowski, U., and Kühme, T. (1993). Adaptive user interfaces: principles and practices. New York, NY: Elsevier Science Inc.

Sierhuis, M., Clancey, W.J., and van Hoof, R. (2007). Brahms: A multiagent modeling environment for simulating work processes and practices. International Journal of Simulation and Process Modelling, 3, 134-152.

Smets, N.J.J.M., Bradshaw, J.M., Diggelen, J. van, Jonker, C., Neerincx, M.A., Rijk, L.J.V. de, Senster, P.A.M., Sierhuis, M., and Thije, O. ten (2010). Assessing Human-Agent Teams for Future Space Missions. IEEE Intelligent Systems, 25(5), 46-53.

Uszok, A., Bradshaw, J.M., Breedy, M., Bunch, L., Feltovich, P., Johnson, M., and Jung, H. (2008). New developments in ontology-based policy management: Increasing the practicality and comprehensiveness of KAoS. Proceedings of the 2008 IEEE Conference on Policy, IEEE Press, pp. 145–152.

Van Diggelen, J. and Neerincx, M.A. (2010). Electronic partners that diagnose, guide and mediate space crew’s social, cognitive and affective processes. In: Spink et al. (Eds.), Proceedings of Measuring Behaviour 2010. pp. 73-76. Wageningen, The Netherlands: Noldus Information Technology bv.

Weiser, M. (1993). Some computer science issues in ubiquitous computing, Communications of the ACM, 36, pp. 75-84.

