The troops of tomorrow may be able to pull the trigger using only their minds. As artificially intelligent drones, hacking, jamming, and missiles accelerate the pace of combat, some of the military’s leading scientists are studying how mere humans can keep up with that incredible speed.

One option: Bypass crude physical controls — triggers, throttles, keyboards — and plug the computer directly into the human brain. In one DARPA experiment, a quadriplegic volunteer first controlled an artificial limb and then flew a flight simulator. Future systems might monitor the user’s nervous system and compensate for stress, fatigue, or injury. Is this the path to what the Pentagon calls human-machine teaming?

This is an unnerving scenario for those humans, like Stephen Hawking, who mistrust artificial intelligence. If your nightmare scenario is robots getting out of control, “let’s teach them to read our minds!” is probably not your preferred solution. It sounds more like the beginning of a movie where cyborg Arnold Schwarzenegger goes back in time to kill someone.

But the Pentagon officials who talked up this research yesterday at Defense One’s annual tech conference emphasized the objective was to improve human control over artificial intelligence. Teaching AI to monitor its user’s level of stress, exhaustion, distraction, and so on helps the machine adapt itself to better serve the human — instead of the other way around. Teaching AI to instantly detect its user’s intention to give a command, instead of requiring a relatively laborious push of a button, helps the human keep control — instead of having to let the AI off the leash because no human can keep up with it.

Official Defense Department policy, as then-Secretary Ash Carter put it, is that the US will “never” allow an artificial intelligence to decide for itself whether or not to kill a human being. However, no less a figure than Carter’s undersecretary for acquisition, technology, and logistics, Frank Kendall, fretted publicly that making our robots wait for human permission would slow them down so much that enemy AI without such constraints would beat us. The Vice-Chairman of the Joint Chiefs, Gen. Paul Selva, calls this the “Terminator Conundrum.” Neuroscience suggests a way out of this dilemma: Instead of slowing the AIs down, make the humans’ orders come faster.

Accelerate Humanity

“We will continue to have humans on the loop, we will have human input in decisions, but the way we go about that is going to have to shift, just to cope with the speed and the capabilities that autonomous systems bring,” said Dr. James Christensen, portfolio manager at the Air Force Research Laboratory’s 711th Human Performance Wing. “The decision cycle with these systems is going to be so fast that they have to be sensitive to and responsive to the state of the individual (operator’s) intent, as much as overt actions and control inputs that human’s providing.”


In other words, instead of the weapon system responding to the human operator physically touching a control, have it respond to the human’s brain cells forming the intention to use a control. “When you start to have a direct neural interface of this type, you don’t necessarily need to command and control the aircraft using the stick,” said Justin Sanchez, director of DARPA’s Biological Technologies Office. “You could potentially re-map your neural signatures onto the different control surfaces” — the tail, the flaps — “or maybe any other part of the aircraft” — say, landing gear or weapons. “That part hasn’t really been explored in a huge amount of depth yet.”
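To make Sanchez’s idea concrete, here is a purely illustrative sketch — not any real DARPA system, and every name in it is hypothetical — of what “re-mapping neural signatures onto control surfaces” might look like in software: a decoder emits an intent (which surface, how much, how confidently), and a confidence gate keeps the human in control by refusing to act on ambiguous signals.

```python
# Hypothetical sketch: route a decoded neural "intent" to an aircraft
# control surface instead of a physical stick input. The decoder,
# field names, and threshold are illustrative assumptions.

INTENT_THRESHOLD = 0.9  # act only on high-confidence decodes

# Surfaces (and other parts) the interface could be re-mapped onto
CONTROL_SURFACES = {"tail", "flaps", "landing_gear"}

def route_intent(intent: dict) -> str:
    """Translate one decoded intent into a command, or hold."""
    target = intent.get("target")
    confidence = intent.get("confidence", 0.0)
    if target not in CONTROL_SURFACES:
        return "hold: unknown target"
    if confidence < INTENT_THRESHOLD:
        return "hold: low confidence"  # wait for a clearer signal
    return f"actuate {target} -> {intent['deflection']:+.1f} deg"

# A confident decode actuates; an ambiguous one is ignored.
print(route_intent({"target": "flaps", "deflection": 5.0, "confidence": 0.95}))
print(route_intent({"target": "flaps", "deflection": 5.0, "confidence": 0.40}))
```

The confidence gate is the crux of the “human control” argument above: the system acts faster than a button press, but only when the decoded intention is unambiguous.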