- Human agents have difficulty determining the AI agent’s intentions, performance, future plans, and reasoning process through simple observation,
- AI agents can be unpredictable,
- AI agents are difficult to direct, especially when operations do not go according to plan,
- Because they have no capacity to decide for themselves, AI agents cannot be held accountable for their actions,
- Human agents cannot be assured that they and the AI share a mutual understanding of an operation's goals.
In order to address these issues, Dr. Jessie Chen, Senior Research Psychologist at the Army Research Laboratory (ARL), and her team developed the Situation Awareness-based Agent Transparency (SAT) model. Participants in the research reported that they perceived the AI agents as "more trustworthy, intelligent and human-like." At present, Chen and her team are exploring a bidirectional transparency model to further improve collaboration between humans and AI.
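To make the idea concrete, here is a minimal sketch of what a transparency report in the spirit of the SAT model might look like in software. The `TransparencyReport` class and its fields are hypothetical illustrations built around the four elements the article names (intent, performance, future plans, and reasoning process); they are not ARL's actual implementation.

```python
# Hypothetical sketch of a SAT-style transparency report. Field names are
# illustrative assumptions based on the four elements named in the article,
# not ARL's real interface.
from dataclasses import dataclass
from typing import List


@dataclass
class TransparencyReport:
    intent: str              # what the agent is trying to accomplish now
    performance: float       # self-assessed confidence/progress, 0.0 to 1.0
    future_plans: List[str]  # ordered actions the agent intends to take next
    reasoning: str           # why the agent chose this course of action

    def summarize(self) -> str:
        """Render the report as a human-readable status message."""
        plans = " -> ".join(self.future_plans) or "none"
        return (
            f"Intent: {self.intent}\n"
            f"Confidence: {self.performance:.0%}\n"
            f"Planned actions: {plans}\n"
            f"Reasoning: {self.reasoning}"
        )


# Example: a reconnaissance robot explaining itself to its human teammate.
report = TransparencyReport(
    intent="Scout the north corridor for obstacles",
    performance=0.85,
    future_plans=["advance 20 m", "scan intersection", "report back"],
    reasoning="North route has the shortest estimated traversal time",
)
print(report.summarize())
```

The point of a structure like this is that the human teammate never has to infer the agent's state from observation alone: intent, confidence, upcoming actions, and rationale are all stated explicitly.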
On one hand, this is good news, because the program promises greater efficiency in human-AI cooperation. On the other hand, it may also make warfare more effective. We've all heard Vladimir Putin say it: the nation that leads in AI will be the ruler of the world. Could this be the beginning of AI warfare?
Army scientists improve human-agent teaming by making AI agents more transparent
U.S. Army Research Laboratory scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative, supported by the Office of the Secretary of Defense. They did so by enhancing agent transparency, which refers to a robot, unmanned vehicle, or software agent's ability to convey to humans its intent, performance, future plans, and reasoning process.
https://phys.org/news/2018-01-army-scientists-human-agent-teaming-ai.html