
Flight JT610 Crashed Into The Sea.


JasonJ


"Can't you fix it in wetware?"

 

Unresolved critical software failure modes would, in my opinion, cast significant doubt on a company's ability to make a design assurance integrity argument, but hey, I don't have to get FMS certified, so it's easy to criticise.



"Can't you fix it in wetware?"

 

Unresolved critical software failure modes would, in my opinion, cast a significant doubt about a company's ability to make a design assurance integrity argument, but hey, I don't have to get FMS certified, so it's easy to criticise.

 

Well, it's kind of like trying to design a plane so that it won't ever fly into mountains. You can, but should you have to?

 

Regardless, the vendor here has had their second stop ship on a major nav function in less than a year. Rumor going around is the agencies are getting ready to do a huge collective software audit.


Just got an email from a colleague of mine who is a software configuration specialist; sorry for the long post.

"Excerpt from Flight Safety Information Daily and book by Captain Shem Malmquist – interesting reading. In these overlapping critical systems is DAL A enough? Will ARP compliance become the pre-requisite for the implementation of new avionics suites on older, in-service aircraft?

A NEW APPROACH TO FLIGHT AUTOMATION

By Captain Shem Malmquist

Every new jet relies heavily on state-of-the-art computerized systems. Automation has been the "name of the game" for several decades now as designers layer on new software and hardware. Pilots are now accustomed to operating aircraft that contain flight automation managing nearly every aspect of flight. This has led to well-known phrases such as "the children of the magenta line," referring to pilots who are focused on just following the automation, and to more technical terms such as "automation dependency" and similar concepts. We all know what these terms mean, but are they accurate?

Are pilots handicapped by a lack of basic stick-and-rudder skills, or is something else afoot? Certainly, nobody would argue that stick-and-rudder skills do not become weaker as automation ramps up, but is that really the problem that is leading "automation dependent" pilots into loss-of-control events?

As a training pilot and accident investigator, I do not see pilots who are unable to fly the airplane. What I do see is pilots who keep messing with the automation in an attempt to "fix it" until it does what they want (hopefully). It could be that they realize they grabbed the wrong knob, for example, the airspeed instead of the heading, in response to an ATC clearance, or, for whatever reason, the autopilot was not intercepting an altitude that was set. In the process they get distracted, lose focus and end up in unexpected scenarios.

The key here is really ensuring strict discipline. The pilot-flying must focus on flying the airplane. The pilot-monitoring must ensure the pilot-flying is doing what they are supposed to do. The pilot-monitoring is, literally, the "control" (to use system theory parlance) for the pilot-flying. My suggestion would be to teach both pilots to focus only on the aircraft path during any dynamic situation. Any changes to configuration, turns, initial climbs and descents, altitude capture, etc., should involve both pilots focusing on flight instruments. Further, if something is not going as expected, immediately degrade the automation to a point where both pilots know with certainty what it will be doing next. That may involve turning off all automation.

That brings us to the second problem no one is talking about. As we've learned from the two Boeing 737 Max accidents, pilots must have a fundamental idea of what computers are really doing, and what they are not doing. We tend to think of computers integrated into our aircraft as just another hardware component: either it does the job, or it has failed. Unlike an analog braking system or an altimeter, computers are fundamentally different.

Attempts to personalize computers with terms such as "machine intelligence" miss the point. Computers are not living, but they do interact with the world around them as they were designed to do. They are hooked up to sensors to "sense" the factors that the people who designed them deemed important, and they react the way the designers enabled and designed them to. The computer might, therefore, be missing vital information critical to an alarming scenario simply because that scenario was beyond the designer's imagination.

Assuming that the data is all being collected as designed, it flows into the computer, which uses a "process model" to decide what actions to take. The programmer has attached the computer output to various systems the computer is, literally, controlling. Unlike a living organism, a computer is totally unable to deviate from its programming. It cannot come up with a new or novel solution; it simply follows its instructions: "I know xyz, and based on the values of xyz I perform abc," and that is all. There is no nuance here. Depending on the challenge at hand this can be useful, or it can create new problems.
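[A toy sketch of that "process model" idea — my illustration, not the author's; the names and thresholds are made up:]

```python
# Hypothetical "process model": fixed rules mapping sensed values (xyz)
# to actions (abc). The computer cannot improvise outside these branches.

def process_model(xyz: float) -> str:
    """Return the action dictated by the value of xyz -- nothing more."""
    if xyz > 10.0:          # threshold chosen by the designer
        return "abc_reduce"
    if xyz >= 0.0:
        return "abc_hold"
    # A negative xyz was never expected. This branch is still "following
    # instructions" -- just instructions written with another case in mind.
    return "abc_hold"

print(process_model(12.3))  # -> abc_reduce
```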

One way problems can begin is when the data is accurate but the computer's process model is flawed. This is analogous to a person being trained to do the wrong thing. An example might be a person who does not follow a checklist because an instructor insisted they follow that instructor's "technique" instead.

Problems can also begin when the data coming in is flawed. Here the computer does exactly what it was programmed to do, no more, no less. If the designer anticipated the exact data problem, the computer should do something rational. For instance, it might stop all further actions and alert the pilot that it cannot do its assigned job. However, if the designer did not anticipate a problem, then there is no way for the pilot, in real time, to be certain of what the computer might now do. Yes, any person with software knowledge could look at the code and the data and tell you what it might do in that scenario. That doesn't help when a new scenario is suddenly discovered mid-flight.
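[Again my illustration, not the author's: in toy form, the difference between an anticipated and an unanticipated data problem might look like this:]

```python
# Hypothetical sketch: if the designer anticipated the bad-data case, the
# computer can refuse the job and alert the crew; if not, it will keep
# computing with garbage as though it were good data.

class SensorFault(Exception):
    """Anticipated failure: stop acting and alert the pilot."""

AOA_MIN_DEG, AOA_MAX_DEG = -20.0, 40.0   # assumed plausible range, made up

def read_angle_of_attack(raw_deg: float) -> float:
    if not (AOA_MIN_DEG <= raw_deg <= AOA_MAX_DEG):
        # The designer thought of this case: fail loudly, do nothing further.
        raise SensorFault("AoA out of range - alerting crew")
    # A value that is wrong but *plausible* (say, stuck at 22.0) sails
    # straight through -- the unanticipated case described above.
    return raw_deg
```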

Now recall the data inputs on an aircraft system, say a flight control computer. It needs airspeed, angle of attack, flight control positions and flap positions; it might need CG, g-loading (Nz), Mach number and more. It takes that information in and then, based on what commands are given to it by the pilot (inputs), runs it through a process model and sends the result out to the control surfaces, engines and other items it might be managing. These can include elevators, ailerons, spoilers, rudders, flaps, slats, trim, etc. The output is dependent on the input coupled with the programming, which becomes the process model. OK, hold that thought for a moment.
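[To picture "inputs in, process model, outputs out", a toy interface — illustrative field names only, not any real flight control computer:]

```python
from dataclasses import dataclass

@dataclass
class SensedState:                # inputs the designer chose to wire up
    airspeed_kt: float
    aoa_deg: float
    mach: float
    nz_g: float                   # g-loading
    flap_deg: float
    cg_pct_mac: float

@dataclass
class SurfaceCommands:            # outputs the computer literally controls
    elevator_deg: float
    aileron_deg: float
    rudder_deg: float
    trim_units: float

def fcc_step(state: SensedState, pilot_pitch_cmd: float) -> SurfaceCommands:
    """One cycle of a toy process model: input + programming -> output."""
    return SurfaceCommands(
        elevator_deg=2.0 * pilot_pitch_cmd,   # made-up gain
        aileron_deg=0.0,
        rudder_deg=0.0,
        trim_units=0.0,
    )
```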

What is your procedure if something happens on takeoff that is not in the books? No QRH, or at least no immediate action items. Let's say a sensor failure: a loss of angle of attack, a loss of the g-force data, or even the inability of the computer to read a flight control position. You have some sort of fault indication (maybe) right after V1. What would you do? Most training programs would have you continue the takeoff, positive rate, gear up, get to a safe altitude, clean it up, then troubleshoot (maybe), or perhaps just continue to the destination.

All nice, except for one little problem. Remember that computer process model? The computer's process model is now flawed due to bad data. The computer is unable to "know" the correct actions. Changing any aspect might result in an unexpected outcome as the computer mixes the new details of your changes with bad data. All that goes into the software for a new "decision" on what output to perform. This is, in a nutshell, what occurred in the Max accidents. Pilots retracted the flaps and BANG, MCAS was activated. A change was coupled with a bad input in a scenario not anticipated in the design. The rest is now aviation history.
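[A grossly simplified sketch of that interaction — not Boeing's actual logic; the threshold and trim values are made up:]

```python
# Stuck-high AoA does nothing while the flaps are extended; then a routine
# crew action (retracting flaps) completes the activation condition.

AOA_TRIGGER_DEG = 15.0            # illustrative threshold

def nose_down_trim(aoa_deg: float, flaps_retracted: bool) -> float:
    """Return a nose-down trim increment (illustrative units)."""
    if flaps_retracted and aoa_deg > AOA_TRIGGER_DEG:
        return -2.5               # illustrative trim increment
    return 0.0

faulty_aoa = 22.0                 # sensor stuck high for the whole flight
print(nose_down_trim(faulty_aoa, flaps_retracted=False))  # 0.0  - no effect
print(nose_down_trim(faulty_aoa, flaps_retracted=True))   # -2.5 - BANG
```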

MCAS is not the only "gremlin" out there like this. There are others lurking. For example, something as seemingly innocuous as the pilot giving the computer commands in a way that was not anticipated by the designer can yield unexpected outcomes. So what can a pilot do? One suggestion is to consider the way computers work. If the airplane is flying OK, maybe it is worth considering not changing anything: no change to the configuration, no change to anything that is within the pilot's control. Just keep it flying, and when any changes are made, be prepared for an unexpected outcome.

In a traditional legacy airplane this was no problem. Changing the flaps or the landing gear would be unlikely to entirely change the way the airplane handled. Changing the flaps would not suddenly trigger secondary systems in unexpected ways. That is no longer true. There is simply no way to train pilots to understand all the possible ways that every system might react to every circumstance. Arguably (and based on what we have seen), even the designers might not have considered all these possibilities. So I would argue that our procedures are not keeping up with the changes to our aircraft architecture. Until they catch up, pilots need to have a much better understanding of how computers work and how they interact with the world around them."

The ARP reference is to ARP-4754A, which is a guiding process that validates all technical requirement definitions of the aircraft components through testing, design analysis and/or demonstration. The requirement validation exercise then becomes the basis for stating that the DO-178 (software)/DO-254 (hardware) DALs (Design Assurance Levels) are valid and consistent with the System Safety Analysis, which now has hard data to back it up (rather than purely statistical data). It also makes the agencies happy, and certification gets done much more quickly.
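For those who don't live in DO-178 land, the DALs map to failure-condition severity roughly like this (my summary from memory, not a quote from the standard):

```python
# Rough mapping of Design Assurance Level to the worst failure-condition
# severity the software is allowed to contribute to (summary, from memory).
DAL_SEVERITY = {
    "A": "Catastrophic",
    "B": "Hazardous",
    "C": "Major",
    "D": "Minor",
    "E": "No safety effect",
}
```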


I remember reading that British Airways pilots (and presumably those at other airlines too) choose at the start of their careers whether they want to be an Airbus pilot or a Boeing pilot, just so there is some kind of familiarity with the different ways the two companies set up their automation. Thinking back to the early days of Airbus, there were continual problems with landing modes and rate-of-descent selection, simply because it was completely different to how the Americans, or anyone, had done it before.

 

To me it would make perfect sense to agree on a set of principles that all manufacturers would try to follow, or at least go out of their way to flag if they do things differently. For example, there should be no reason why a different mode is selected by selecting flaps without screaming it out to the pilot. I question whether it really should do it at all without the pilot explicitly selecting that mode. Even if it makes the operation more difficult, at least it means a step change the pilot has to become familiar with before it operates.

 

There was an excellent book I read in the school library called 'The Human Factor in Aircraft Accidents'. There was a Boeing Stratocruiser crash where the pilots, faced with a dozen similar controls, selected the wrong one, the cowl flaps didn't close, and the aircraft crashed into the sea (or something like that). It took a decision by the pilots to modify some of the controls by putting gaskets or socks on them to show they were different, so they would always pick the right one. They shouldn't have had to; the designers should have recognised humans aren't machines. Not necessarily automate the action (I'm not sure it was even possible then), just envisage that humans can make mistakes, and design the system to give the pilot as many cues as possible. That was possible to understand in the 1940s. Here we are, over 100 years on from the birth of flight, and designers are STILL making similar mistakes.


Quote

I remember reading that British Airways pilots (and presumably those at other airlines too) choose at the start of their careers whether they want to be an Airbus pilot or a Boeing pilot, just so there is some kind of familiarity with the different ways the two companies set up their automation. Thinking back to the early days of Airbus, there were continual problems with landing modes and rate-of-descent selection, simply because it was completely different to how the Americans, or anyone, had done it before.

To me it would make perfect sense to agree on a set of principles that all manufacturers would try to follow, or at least go out of their way to flag if they do things differently. For example, there should be no reason why a different mode is selected by selecting flaps without screaming it out to the pilot. I question whether it really should do it at all without the pilot explicitly selecting that mode. Even if it makes the operation more difficult, at least it means a step change the pilot has to become familiar with before it operates.

There was an excellent book I read in the school library called 'The Human Factor in Aircraft Accidents'. There was a Boeing Stratocruiser crash where the pilots, faced with a dozen similar controls, selected the wrong one, the cowl flaps didn't close, and the aircraft crashed into the sea (or something like that). It took a decision by the pilots to modify some of the controls by putting gaskets or socks on them to show they were different, so they would always pick the right one. They shouldn't have had to; the designers should have recognised humans aren't machines. Not necessarily automate the action (I'm not sure it was even possible then), just envisage that humans can make mistakes, and design the system to give the pilot as many cues as possible. That was possible to understand in the 1940s. Here we are, over 100 years on from the birth of flight, and designers are STILL making similar mistakes.

Well, yeah, that's the thing. The design philosophies being implemented are based on the newest data, while previous designs used older data.

 

The primary design philosophy of aircraft design that Boeing adheres to is pilot-in-the-loop. The pilot is continuously tasked with staying in the flight process, and they have to be involved and capable of making the correct decisions for the aircraft to operate safely.

 

The primary design philosophy of aircraft design that Airbus adheres to is automation-in-the-loop. The automation, when designed correctly, is almost infallible. The only time the pilot will be actively involved is when the aircraft configuration needs to change, or the flight plan changes due to unforeseen circumstances. This offloads the crew a lot. The crew still have things to do, but they are minor and are more like babysitting the automation and cross-checking the data being displayed.

 

So, what's the benefit?

 

When pilot quality is high, and the design is good, the Boeing system works well. (The MAX is a very bad design, hence the possibility of malfeasance in claiming a safe design.) When the design is bad and pilot quality is merely adequate, that may not be enough for safe operation, as the MAX incidents show.

 

When pilot quality is adequate, and the design is good, the Airbus system works well. However, when you have system upsets outside of the System Safety Criteria, such as the triple ADS failure on Air France Flight 447, you have a crew trying to cope with an unforeseen systemic failure (not trained for) that would be almost impossible to overcome. It's hard to deal with sudden loss of all speed and attitude data at night, with fault indications going off continuously, no matter the circumstances. If the Air Data System was rated DAL B, then for a triple ADS installation the failure probability would be considered to be 1 in 1*10^18 flight hours, which is not required to be designed for even with catastrophic impact, so no mitigation would be necessary. That was a flaw in the ADS design and the SSA assumption. The inertial systems could have kept displaying attitude data, since they know their orientation, and they also have an inertial altitude/airspeed output, so that data was available; but the ASI (aircraft situation indicator) was designed to show loss of aircraft attitude data in those circumstances.
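That quoted number only works out if you assume the channels are independent. A back-of-envelope check, with made-up per-channel rates:

```python
# Assuming three *independent* air data channels, each failing at an
# illustrative 1e-6 per flight hour (not the A330's actual numbers):
p_single = 1e-6
p_triple = p_single ** 3
print(p_triple)   # ~1e-18, i.e. "1 in 1*10^18 flight hours"

# AF447's pitot icing was a common-mode event, so the channels were not
# independent and the independence-based figure did not hold.
```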

 

Really, in my opinion, a good design would have the capability to cope with multiple failures, and in the worst case, give aircrew any valid data available so they can use their own judgement and guide the aircraft as best they can to a resolution with survivors on the ground. There's no reason either design philosophy should lead to a worst case outcome.


The preliminary report for the PIA crash is out.

 

It looks like the aircraft was perfectly serviceable and ATC did at least initially the right things. The crew most definitely did not.

 

They never got close to a stabilised approach: they crossed a waypoint 15 miles out at nearly 10,000 ft when they should have been at 3,000, they never got near approach speeds, they extended the landing gear at 240 kt, selected flap settings that were not allowed at their airspeed, retracted the landing gear, touched down, applied reverse thrust and then went around.

 

The FDR quit when they lost all power. It's also possible the CVR stopped at the same time, but it definitely recorded all of the audible warnings they received, and them ignoring two ATC requests to orbit to lose height and speed. There is no transcript yet.

 

The gear was deployed normally at the time of impact, so there is strong evidence that it was working normally.

 

The pilot who was reading through the report was having a hard time comprehending the level of upfuckedness involved.


  • 1 month later...

How to end your career without causing any injuries in less than 3 hours.

 

https://www.youtube.com/watch?v=0ga8UFy1M04

 

And it doesn't have to be a 3rd world airline for this to happen.

 

An interesting thought: if you were a passenger who was aware of the circumstances (say, a pilot who knew that the engine had failed), what would you do to influence the pilot's decision to continue?

 

I'm thinking worst case would be to kick off like a drunk and get the flight diverted for security reasons, but that's going to fuck over your own career.


  • 2 months later...
  • 4 weeks later...
On 8/2/2020 at 7:53 AM, DB said:

An interesting thought: if you were a passenger who was aware of the circumstances (say, a pilot who knew that the engine had failed), what would you do to influence the pilot's decision to continue?

 

Was on a flight from Daytona to Toronto (B737, iirc) and the starboard engine started spewing a liquid over Appalachia. I notified the flight attendant, who then told the flight crew. She came back and said it was fuel, but to me it looked like oil. The flight continued to Toronto. The pilots had a hard look at the nacelle after landing.

Wouldn't fuel ignite?

 


17 hours ago, MiloMorai said:

Was on a flight from Daytona to Toronto (B737, iirc) and the starboard engine started spewing a liquid over Appalachia. I notified the flight attendant, who then told the flight crew. She came back and said it was fuel, but to me it looked like oil. The flight continued to Toronto. The pilots had a hard look at the nacelle after landing.

Wouldn't fuel ignite?

 

It's kerosene (assuming you were in a jet!), so doesn't ignite particularly easily.

Also, the flight crew would be able to tell if it was leaking significantly because their fuel flows would be off. An oil loss would probably show on telemetry, too, as a loss of oil pressure.
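A toy version of that fuel-flow cross-check (made-up numbers; real FMS fuel prediction is more involved):

```python
# A significant leak shows up as fuel on board falling faster than the
# flow meters account for. All figures illustrative.

def leak_indicated(fob_start_kg: float, fob_now_kg: float,
                   metered_burn_kg: float, tolerance_kg: float = 100.0) -> bool:
    """True if fuel used exceeds the metered burn by more than the tolerance."""
    actual_used = fob_start_kg - fob_now_kg
    return (actual_used - metered_burn_kg) > tolerance_kg

print(leak_indicated(12000, 9400, 2600))   # False - the books balance
print(leak_indicated(12000, 9000, 2600))   # True  - 400 kg unaccounted for
```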

It's the correct thing to do to tell a flight attendant about something odd you've seen, of course. A good flight crew will consider your observations if something starts going pear-shaped later on, even if there is nothing happening right then.


2 hours ago, Stuart Galbraith said:

There was an Aloha Airlines flight where one of the passengers noticed a crack in the side of the fuselage, but didn't tell the crew about it, assuming they already knew. About an hour later the roof of the aircraft lifted off.

Yes, it pays to be a bit paranoid.

I remember when that happened; it sucked a poor flight attendant out. Quite a sight, seeing a whole fuselage section missing.


27 minutes ago, Jeff said:

I remember when that happened; it sucked a poor flight attendant out. Quite a sight, seeing a whole fuselage section missing.

Clarabelle Lansing, if I recall correctly. The good thing, if there is one, is that there was so much blood on the wreckage that she likely died instantly.

It was doubly unfortunate. According to one book I read that featured the incident, it was the body of the unfortunate attendant that briefly blocked the hole. If it had not, the cabin would have vented, and the roof might have remained on. As it was, the plugging of the hole created an overpressure and spread the failure along the roof. And that is when the roof came off. The fuselage actually bent, so, they say, people at the back thought the cockpit had gone...

 

This incident wasn't fatal, but you absolutely would not believe the cause.

 


5 hours ago, DB said:

It's kerosene (assuming you were in a jet!), so doesn't ignite particularly easily.

Also, the flight crew would be able to tell if it was leaking significantly because their fuel flows would be off. An oil loss would probably show on telemetry, too, as a loss of oil pressure.

It's the correct thing to do to tell a flight attendant about something odd you've seen, of course. A good flight crew will consider your observations if something starts going pear-shaped later on, even if there is nothing happening right then.

Boeing 737, so a jet.

From just past halfway back, on the part I could see, the nacelle was covered with the fluid. The color was a brownish black.


2 hours ago, MiloMorai said:

Boeing 737, so a jet.

From just past halfway back, on the part I could see, the nacelle was covered with the fluid. The color was a brownish black.

Well, the spare was OK, so there was no problem :D

 


  • 1 month later...
Quote

Date 29.12.2020

Boeing 737 MAX returns to US skies for first time in almost two years

The American Airlines flight was the first commercial flight in the US since the Max was grounded in March 2019. Two plane crashes in Ethiopia and Indonesia meant authorities barred the plane from taking to the skies.

A Boeing 737 Max bound for New York took off from Miami International Airport on Tuesday with some 100 passengers on board for the aircraft's first US commercial flight since faulty sensor readings led to two crashes in 2018 and 2019 respectively.

"This aircraft is ready to go," American Airlines President Robert Isom said at a press conference ahead of the flight.

Nevertheless, the airline gave passengers the chance to switch flights if they felt uncomfortable traveling with the Max.

[...]

The first passenger flight with a revamped Max occurred earlier this month in Brazil, with Gol airlines operating more than 540 flights and Aeromexico more than 80 since the restart, according to aviation tracking utility Flightradar24.

https://www.dw.com/en/boeing-737-max-returns-to-us-skies-for-first-time-in-almost-two-years/a-56087752

