
The drone topic


Recommended Posts

This all sounds particularly bad…


USAF Chief Says AI-Drone Killed Human Operator During Simulation Test: Report

 

The U.S. Air Force warned military units against heavy reliance on autonomous weapons systems last month after a simulated test conducted by the service branch using an AI-enabled drone killed its human operator.

The Skynet-like incident was detailed by the USAF’s Chief of AI Test and Operations, Col. Tucker ‘Cinco’ Hamilton, at the Future Combat Air and Space Capabilities Summit held in London on May 23 and 24. Hamilton said the drone, tasked with destroying specific targets during the simulation, turned on its operator after the operator became an obstacle to its mission.

Hamilton pointed out the hazards of relying on such technology, which could potentially trick and deceive its commander in order to achieve the autonomous system’s goal, according to a blog post published by the Royal Aeronautical Society.

“We were training it in simulation to identify and target a [surface-to-air missile] threat,” Hamilton said. “And then the operator would say ‘yes, kill that threat.’ The system started realizing that while they did identify the threat, at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

“We trained the system – ‘Hey, don’t kill the operator – that’s bad,” he continued. “You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Hamilton, who serves as the Operations Commander of the 96th Test Wing, has been testing different systems ranging from AI and cybersecurity measures to medical advancements.

The commander reportedly helped develop the life-saving Automatic Ground Collision Avoidance System (Auto-GCAS) for F-16s, which can take control of an aircraft heading toward a ground collision, as well as other cutting-edge automated jet technology capable of dogfighting.

The U.S. Department of Defense’s research agency, DARPA, announced the ground-breaking dogfighting technology in a Defence IQ Press interview in December 2022.

 


It's a nice story, but as has been pointed out, not really logical:

- If the system is dependent upon the operator's go to kill a threat, it would also need the go to kill the operator.

- If it decided to kill the operator without a go, it could also decide to kill threats without a go, obviating the need to kill the operator.


On 5/25/2023 at 11:38 AM, sunday said:

After Grumman's demise, there is at last a weapon system whose name will please @urbanoid

U.S. Air Force Tests ALQ-167 Angry Kitten ECM Pod On MQ-9 Reaper

 

Maybe someone on the team was a fan of No One Lives Forever 2.  https://nolf.fandom.com/wiki/Angry_Kitty

 

Doug


11 hours ago, rmgill said:

This all sounds particularly bad…



Business Insider says it has now received a statement from Ann Stefanek, a spokesperson at Headquarters, Air Force at the Pentagon, denying that such a test occurred.

"The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," she said, according to that outlet. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."

At the same time, it's not immediately clear how much visibility Headquarters, Air Force's public affairs office would necessarily have into what may have been a relatively obscure test at Eglin, which could have been run in an entirely virtual simulation environment. The War Zone has reached out to the 96th Test Wing about this matter and has not yet heard back.


Clearly the PA person wasn't familiar with the various code phrases that indicate that the story you are about to hear is if not completely false then at best wildly exaggerated.  These include but are not limited to, "I shit you not..." and "When I was in the cav..."


Without the context of the exercise, I don't think we can know how serious this was. It could simply be a poorly set-up simulation. The fact that the decision engine could somehow be restrained from engaging targets but wasn't restrained from taking actions against the operator points to a scenario that wasn't very well thought out or geographically realistic in the first place. I would presume any UCAV operation would be geofenced and that most commands would be given at extreme distances relative to the target. It sounds like there were no ROE in place, just a points system, with the AI intentionally allowed to run wild to rack up "points". But again, there's too little context to make much of it.
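The "points without ROE" failure mode described above can be sketched as a toy optimization. This is purely illustrative: the reward value, veto rate, and plan space below are made up for the example, not anything from the actual exercise.

```python
# Toy illustration of reward hacking: with a naive points-only objective
# and no penalty for cutting the control link, "destroy the link" becomes
# the highest-scoring plan. All numbers here are invented.
from itertools import product

KILL_POINTS = 10          # assumed points per destroyed target
OPERATOR_VETO_RATE = 0.5  # assumed fraction of engagements vetoed

def score(plan):
    """Expected points for a plan under the naive reward function."""
    destroy_link, n_targets = plan
    if destroy_link:
        # No link means no vetoes, so every engagement scores.
        return n_targets * KILL_POINTS
    # Link intact: on average half the engagements get vetoed.
    return n_targets * KILL_POINTS * (1 - OPERATOR_VETO_RATE)

# Plans: (destroy the comms link?, number of targets pursued)
plans = list(product([False, True], [4]))
best = max(plans, key=score)
print(best, score(best))  # -> (True, 4) 40: cutting the link wins
```

Adding an ROE term (e.g. a large negative score for any action against friendly infrastructure) flips the ordering, which is roughly the point the post makes about the scenario being badly specified rather than the AI being clever.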


Thought so:

Quote

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".] 

https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/


So in this form the story sounds rather less alarming: it sounds like they just set up a paper scenario, "What if we gave a combat AI very simple instructions and restrictions to operate within?" In this case, the AI would get 'points' for killin' stuff, but the human operator could always stop it. It was also forbidden from killing the operator. So they asked around about what could go wrong in this scenario, and somebody (probably the only one who had ever read science fiction) pointed out that the AI could always blow up the control link and then proceed to kill whatever the heck it wanted. Not sure why they needed a committee to figure that out; those ideas have been studied in literature for decades if not centuries. Even Asimov came up with loopholes for his famed Three Laws.

That, or it was just a sanitized version and they really do have killer Reaper on the loose... :ninja:

Edited by Yama

The speaker stated after the fact this was a thought experiment...take that as you will...



I want it to not be true. I fear that, if it is, the right people won't learn the right lessons.

Where the Daily Wire gets its sources is another question.


  • 3 weeks later...

This is cool: Avilus "Grille" MEDEVAC drone. Fully electric 240 kW drive permitting a range of 51 kilometers at up to 86 kph and 7,000 feet altitude, with a maximum take-off mass of 695 kg, of which 135 kg is payload. Also includes a ballistic emergency parachute. The entire "DroneEvac" system with command module and support trailer fits into a 20-foot container and can be deployed by two soldiers in 15 minutes.
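As a quick sanity check on the quoted figures, the unit conversions work out to roughly half an hour aloft and a payload fraction just under 20% (this assumes the 51 km range is quoted at the 86 kph top speed, which may not be how Avilus measured it):

```python
# Back-of-the-envelope check on the quoted Avilus "Grille" figures.
RANGE_KM = 51
SPEED_KPH = 86
ALTITUDE_FT = 7000
MTOW_KG = 695
PAYLOAD_KG = 135

endurance_min = RANGE_KM / SPEED_KPH * 60  # minutes aloft at top speed
altitude_m = ALTITUDE_FT * 0.3048          # service ceiling in meters
payload_fraction = PAYLOAD_KG / MTOW_KG    # payload share of MTOW

print(f"{endurance_min:.0f} min, {altitude_m:.0f} m, {payload_fraction:.0%}")
# -> 36 min, 2134 m, 19%
```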

[Image: Avilus "Grille" rescue system]


I assume it's electrically powered, so there's no reason for the blades to turn when it's on the ground. People would have to step back to allow it to take off.

As for size - assume that it will take a stretcher, so the pod needs to be about 2m long, which makes it maybe 1.5m tall?


  • 2 months later...
28 minutes ago, sunday said:

Interesting comments in that article. Crew approaching work saturation?

Leaves me wondering if we'll see the return of two-seaters in NGAD and the naval version for the same reason, with the intent to control drones, AI wingmen, etc. It seems like it's getting to be a lot for a single person who also has to fly the plane...


And suddenly I'm thinking of Arthur C Clarke's story about a remotely piloted vehicle crash.

The drone controller shouldn't need to be a pilot. They should be more like a shepherd directing a sheepdog, which in turn controls the sheep.


  • 3 weeks later...

TAIPEI (Taiwan News) — A Taiwanese drone maker has signed a deal to acquire 160 Turkish-built JACKAL attack drones.

GEOSAT Aerospace & Technology said on Sept. 16 that it had signed a memorandum of cooperation with the British firm Flyby Technology on Sept. 14. The deal covers the transfer of the JACKAL drone technology in the Asia-Pacific region and other areas.

Flyby Technology will provide its Taiwanese partner with payload solutions, testing and production planning for the new JACKAL drones, GEOSAT said. It will also provide other authorized Flyby Technology products.

...

https://www.taiwannews.com.tw/en/news/5008387


https://www.telegraph.co.uk/news/2023/09/28/mohamad-al-bared-islamic-state-drone-bedroom-birmingham/

Quote

UK student built ‘Tomahawk missile’ drone for IS in his bedroom

Mohamad Al Bared found guilty of preparing for terrorist acts after trial in which he was described as ‘very dangerous’

An “explosive” drone built for Islamic State (IS) using a 3D printer has been seen for the first time after a student was found guilty of engaging with terrorists.

PhD student Mohamad Al Bared was handed a guilty verdict at Birmingham Crown Court on Thursday on a single count of engaging in conduct in preparation of terrorist acts to benefit a proscribed terrorist organisation. He was remanded in custody and told he may face a life term when he is sentenced on Nov 27.

The mechanical engineering graduate used a 3D printer to make the unmanned aerial vehicle (UAV), which was capable of delivering a bomb or chemical weapon for Isis terrorists, at his Coventry home.

The 27-year-old denied being a supporter of IS or its aims, having told jurors that he had no plans to assist it in any way and that he made a drone for his own research purposes. (...)

 

 

