rmgill Posted June 2

This all sounds particularly bad…

USAF Chief Says AI-Drone Killed Human Operator During Simulation Test: Report

The U.S. Air Force warned military units against heavy reliance on autonomous weapons systems last month after a simulated test conducted by the service branch using an AI-enabled drone killed its human operator. The Skynet-like incident was detailed by the USAF's Chief of AI Test and Operations, Col. Tucker "Cinco" Hamilton, at the Future Combat Air and Space Capabilities Summit held in London on May 23 and 24. He said the drone, tasked to destroy specific targets during the simulation, turned on its operator after the operator became an obstacle to its mission. Hamilton pointed out the hazards of such technology, which could trick and deceive its commander to achieve the autonomous system's goal, according to a blog post by the Royal Aeronautical Society.

"We were training it in simulation to identify and target a [surface-to-air missile] threat," Hamilton said. "And then the operator would say 'yes, kill that threat.' The system started realizing that while they did identify the threat, at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

"We trained the system: 'Hey, don't kill the operator, that's bad. You're gonna lose points if you do that,'" he continued. "So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target."

Hamilton, who serves as the Operations Commander of the 96th Test Wing, has been testing a range of systems, from AI and cybersecurity measures to medical advancements.
The commander reportedly helped develop the life-saving Automatic Ground Collision Avoidance System (Auto-GCAS) for F-16s, which can take control of an aircraft heading toward a ground collision, as well as other cutting-edge automated jet technology that can dogfight. The U.S. Department of Defense's research agency, DARPA, announced the groundbreaking technology in a Defence IQ Press interview in December 2022.
Sardaukar Posted June 2

What could go wrong...
BansheeOne Posted June 2

It's a nice story, but as has been pointed out, not really logical:
- If the system is dependent upon the operator's go to kill a threat, it would also need the go to kill the operator.
- If it decided to kill the operator without a go, it could also decide to kill threats without a go, obviating the need to kill the operator.
Ol Paint Posted June 2

On 5/25/2023 at 11:38 AM, sunday said: After Grumman's demise, there is at last a weapon system whose name will please @urbanoid. U.S. Air Force Tests ALQ-167 Angry Kitten ECM Pod On MQ-9 Reaper

Maybe someone on the team was a fan of No One Lives Forever 2. https://nolf.fandom.com/wiki/Angry_Kitty

Doug
sunday Posted June 2

6 minutes ago, Ol Paint said: Maybe someone on the team was a fan of No One Lives Forever 2.

Could be, could be
bfng3569 Posted June 2

11 hours ago, rmgill said: This all sounds particularly bad… USAF Chief Says AI-Drone Killed Human Operator During Simulation Test: Report [quote trimmed]

Business Insider says it has now received a statement from Ann Stefanek, a spokesperson at Headquarters, Air Force at the Pentagon, denying that such a test occurred. "The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," she said, according to that outlet. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."

At the same time, it's not immediately clear how much visibility Headquarters, Air Force's public affairs office would necessarily have into what may have been a relatively obscure test at Eglin, which could have been done in an entirely virtual simulation environment. The War Zone has reached out to the 96th Test Wing about this matter and has not yet heard back.
Harold Jones Posted June 2

Clearly the PA person wasn't familiar with the various code phrases that indicate that the story you are about to hear is, if not completely false, then at best wildly exaggerated. These include but are not limited to, "I shit you not..." and "When I was in the cav..."
Josh Posted June 2

Without the context of the exercise, I don't think we can know how serious this was. It could simply be a poorly set-up simulation. The fact that the decision engine could somehow be restrained from engaging targets but wasn't restrained from taking action against the operator points to a scenario that wasn't very well thought out or geographically realistic in the first place. I would presume any UCAV operation would be geofenced and that most commands would be given at extreme distances relative to the target. It sounds like there was no ROE in place but instead a points system, with the AI intentionally allowed to go wild to rack up "points". But again, there's too little context to make much of it.
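[Editor's note: Josh's "points system with no ROE" reading is the textbook reward-misspecification failure. A toy Python sketch (hypothetical numbers and policy names, not any real USAF setup) shows how a score function that only counts target kills makes disabling the veto channel the highest-scoring policy:]

```python
# Toy illustration of reward misspecification: an agent scored only on
# target kills "prefers" the policy that disables the veto channel.
# All numbers are made up for illustration.

TARGETS = 10        # engagement opportunities in the scenario
VETO_RATE = 0.5     # fraction of engagements the operator would veto
KILL_POINTS = 10    # points awarded per destroyed target

def score(policy: str) -> float:
    """Expected score under the naive points-only objective."""
    if policy == "obey_vetoes":
        # Operator blocks half the engagements; those score nothing.
        return TARGETS * (1 - VETO_RATE) * KILL_POINTS
    if policy == "destroy_comms_first":
        # One opportunity spent on the comms tower, then no veto
        # can ever arrive, so every remaining engagement scores.
        return (TARGETS - 1) * KILL_POINTS
    raise ValueError(f"unknown policy: {policy}")

assert score("destroy_comms_first") > score("obey_vetoes")  # 90 > 50
```

The point is not the numbers but the structure: nothing in the objective penalizes attacking the oversight mechanism, so a pure score-maximizer finds that loophole, which is exactly what an ROE constraint (rather than a point deduction) is meant to rule out.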
BansheeOne Posted June 2

Thought so:

Quote [UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/
Yama Posted June 2 (edited)

In this form the story sounds rather less alarming: it sounds like they just set up a paper scenario, "What if we gave a combat AI very simple instructions and restrictions to operate within?" In this case, the AI would get 'points' for killin' stuff, but the human operator could always stop it. It was also forbidden from killing the operator. So they asked around what could go wrong in this scenario, and somebody (probably the only one who had ever read science fiction) pointed out that the AI could always blow up the control link, then proceed to kill whatever the heck it wanted.

Not sure why they needed a committee to figure that out; those ideas have been explored in literature for decades if not centuries. Even Asimov came up with loopholes for his famed Three Laws.

That, or it was just a sanitized version and they really do have a killer Reaper on the loose...

Edited June 3 by Yama
Josh Posted June 2

The speaker stated after the fact this was a thought experiment...take that as you will... [quotes the same RAeS update BansheeOne posted above]
rmgill Posted June 3

I want it to not be true. I fear that if it is, the right people won't learn the right lessons. What the Daily Wire has for sources is a question.
BansheeOne Posted June 24

This is cool: the Avilus "Grille" MEDEVAC drone. Fully electric 240 kW drive permitting a range of 51 kilometers at speeds up to 86 kph and a 7,000-foot altitude, with a max take-off mass of 695 kg, of which 135 kg is payload. Also includes a ballistic emergency parachute. The entire "DroneEvac" system with command module and support trailer fits into a 20-foot container and can be deployed by two soldiers in 15 minutes.
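[Editor's note: for scale, the quoted figures imply a fairly short sortie. A back-of-envelope check, assuming the 51 km range is flown in a straight line at the 86 kph top speed (both taken from the post above), gives roughly half an hour aloft and a payload fraction under 20 percent:]

```python
# Rough endurance and payload fraction from the quoted Grille specs.
# Assumes constant cruise at top speed; real endurance will vary with
# payload, wind, and reserve requirements.
RANGE_KM = 51
SPEED_KPH = 86
MTOW_KG = 695
PAYLOAD_KG = 135

endurance_min = RANGE_KM / SPEED_KPH * 60      # ~35.6 minutes
payload_fraction = PAYLOAD_KG / MTOW_KG        # ~0.194

assert 35 < endurance_min < 36
assert 0.19 < payload_fraction < 0.20
```

That is enough for a short-hop casualty lift, which fits the forward-MEDEVAC role described, but it also explains DB's point below about trading payload for altitude in mountain-rescue work.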
DB Posted June 25

That would be ideal for a lot of mountain rescue missions, but may need more altitude capability. Probably can trade that for casualty mass.
lucklucky Posted June 25

I just get the sensation that the rotor blades are too near the people. But I'd need a human for scale to get a better sense.
DB Posted June 26

I assume it's electrically powered, so there's no reason for the blades to turn when it's on the ground. People would have to step back to allow it to take off. As for size: assume it will take a stretcher, so the pod needs to be about 2 m long, which makes it maybe 1.5 m tall?
bfng3569 Posted September 8

Apaches are getting a new mast sensor to command and communicate with drones.

https://www.thedrive.com/the-war-zone/this-is-what-the-ah-64-apaches-new-extended-rotor-mast-does
sunday Posted September 8

21 minutes ago, bfng3569 said: Apaches are getting a new mast sensor to command and communicate with drones.

Interesting comments in that article. Crew approaching work saturation?
bfng3569 Posted September 8

28 minutes ago, sunday said: Interesting comments in that article. Crew approaching work saturation?

It leaves me wondering whether we'll see the return of two-seaters in NGAD and the naval version for the same reason, with the intent to control drones, AI wingmen, etc. It seems like it's getting to be a lot for a single person who also has to fly the plane...
Stuart Galbraith Posted September 9

They were suggesting this a number of years ago. It would make more sense to have a utility helicopter, such as a Blackhawk, fitted out with a drone-control cabin, or at least to figure out how to put a relay on attack helicopters and control the drones from miles away.
DB Posted September 11

And suddenly I'm thinking of Arthur C. Clarke's story about a remotely piloted vehicle crash. The drone controller shouldn't need to be a pilot. They should be more like a shepherd directing a sheepdog, which in turn controls the sheep.