rmgill Posted November 27, 2012 Yup. I just wonder when they're going to do this with optics and IR in a more basic form. It would seem that a kit plugged into an optical data network should be able to gather data from multiple point sources in both the IR and visible spectrum and show them to a crew. One has to wonder when F-35 technology will get down to AFVs in a basic form.
shep854 Posted November 27, 2012 Even the ability to scroll or pan around using one screen linked to the cams would be helpful, and should be relatively cheap.
Mr King Posted November 27, 2012 How about an Apache-type helmet with a single-eye viewer for the commander and driver? Then all they have to do to get situational awareness around the vehicle is move their head in that direction. The commander could also use it to aim and fire his weapon.
Simon Tan Posted November 27, 2012 Covered 10 years ago.......the idea of the transparent armor. This was back when VR was IN........and did not work.
JW Collins Posted November 27, 2012 Well, the technology involved has come quite a way in the past 10 years. The prime example being the F-35, or even the variety of sensors designed to detect missile launches, laser designation, and other threats. I don't believe it is yet robust enough for AFV use, but give it another decade and it may be. You could probably achieve a lesser level of such capability sooner if you use displays within the vehicle as opposed to a helmet-mounted system.
Mr King Posted November 27, 2012 Covered 10 years ago.......the idea of the transparent armor. This was back when VR was IN........and did not work. Out of curiosity, any particular reason it did not work, or was it a whole host of reasons?
rmgill Posted November 27, 2012 I'll bet it was processing speed and bandwidth for data transfer between modules.
shep854 Posted November 27, 2012 Mud got on the sensors...... GEN. Mud's a sneaky SOB, ain't he?
rmgill Posted November 27, 2012 How do you keep mud off any other sensors like a CITV or a GPS? Seems like a poor failure mode to attribute to a distributed system.
Guest Jason L Posted November 27, 2012 How do you keep mud off any other sensors like a CITV or a GPS? Seems like a poor failure mode to attribute to a distributed system. I was being facetious. Simon was already talking about the general fad for augmented reality/data presentation with the whole net-centric battlefield shtick. The real issue is that they didn't know how the hell to design an overlaid interface that didn't cognitively overload the user. The classic problem with all of these augmented reality systems is that you lose the ability to do your job, because simply processing all of the info takes over the job. There were a ton of papers and studies being put out in the 1995-2005 timeframe on panospheric viewing systems, overlaying data and all of that stuff, and then it just died off. My view is that the funding fad wore out (these things tend to come in cycles and only last so long before attention spans run out) without yielding some sort of really stellar setup that was sufficient to overcome funding limitations and organizational inertia. I suspect the topic will be revisited now that AR is more or less arriving in the real world and informational streaming, interface design and the like are becoming a formal technical discipline.
rmgill Posted November 27, 2012 (edited) Most of the systems that overload the user do so because of a bad interface. A good system should show you as much as you want or need, and be configurable for better data formats or changes as the user prefers. It has to be integrated, and I suspect that's where these systems fail. Ideally, instead of trying to see out of various periscopes, manage a CROWS mount, turn to look at radios, run BFT/FBCB2 and translate that map overlay into what you are seeing through various viewing devices, the system should overlay that onto your field of view (on top of fighting your tank, etc.). Want terrain to be transparent? Turn that filter on or off. Want to not see blue units bracketed in your view, or just see carets for them? Select that option. Ideally this would be on a viewer, or later a helmet viewing device. New systems should reduce workload by presenting data in clear and concise formats, not increase workload. If the workload is increased over what crews already do, the interface design folks have failed, utterly. It's like a weapons designer making a bigger, newer gun that doesn't do anything better than the old one but is more intensive to operate. Networked data is useful in this, especially if you're taking feeds from helos and drones and the like on where red forces are. BUT it has to not overwhelm the commander and fill his view with carets and the like. Target priorities and selectable, controllable details are critical. Some of this takes some really heavy lifting by the systems designers to make it work properly and filter out shit you don't need to see, but at the same time make sure the shit that's about to kill you shows up.
Things that are a threat should be called out regardless of your selected options/filters: getting lased, getting painted with radar, having an ATGM launch signature nearby, or even closing acoustic signatures of red-force vehicles should be a threat signature (if we can make off-route EFP mines that use acoustic signatures to select targets, we should be able to build that into a threat analysis system). There's a very arcane field of hardware, application and OS design that focuses JUST on interface. Apple (mostly) had this pretty well figured out for years, but other companies have looked at the aspects and taken it up. Smart automakers started looking at this years ago too. Some folks stumble through the principles but fail. I've done some of my own interface workings, but I'm effectively blind because, as the database creator, I have the advantage of understanding how to get what I want; it takes an authoritative interface designer to just do the visual/operator ergonomics in order to make things really work properly, OR you have to have a systems designer who groks the interface issues. I think one of the things that gives this a bad name is that it attempts to do more with less (like with FCS) instead of augmenting current capabilities, giving those crews MORE of the information they need and helping to filter out the things they don't need. Because it's all gee-whiz, you have brass wanting to piggyback backseat warfighting into the system, and that's not helpful at all. Edited November 27, 2012 by rmgill
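[Editor's note] The "threats override the crew's display filters" rule described above can be sketched in a few lines. This is a minimal illustrative sketch, not any fielded system's logic; the event types, dictionary shape, and the `events_to_display` helper are all hypothetical names invented for the example.

```python
# Hypothetical sketch: routine contacts respect the crew's filter settings,
# but anything classified as an immediate threat is always displayed.

THREAT_TYPES = {"laser_warning", "radar_paint", "atgm_launch", "closing_acoustic"}

def events_to_display(events, enabled_filters):
    """Return the events to render: filtered routine data plus ALL threats."""
    shown = []
    for event in events:
        if event["type"] in THREAT_TYPES:
            shown.append(event)          # threats bypass filters entirely
        elif event["type"] in enabled_filters:
            shown.append(event)          # routine data obeys crew filters
    return shown

events = [
    {"type": "blue_unit", "id": "B1"},
    {"type": "laser_warning", "bearing": 270},
    {"type": "terrain_overlay", "id": "T3"},
]

# Crew has switched every routine overlay off, yet the laser warning still shows:
print(events_to_display(events, enabled_filters=set()))
# [{'type': 'laser_warning', 'bearing': 270}]
```

The point of the split is that filter configuration only ever reduces routine clutter; the threat path is not configurable, matching the "regardless of your selected options/filters" requirement.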
rmgill Posted November 27, 2012 (edited) When I point to processing speed, I'm contemplating the processor load of the interferometry system and the requirement that it be real time and not delayed at all. It's one thing, 15 years ago, to be doing long-baseline interferometry in an astronomical study format where you can gather data and then leave it in computers to process for a few days. It's another thing to have to rely on this for life-and-death information. We're seeing encoding cards/systems at work that are able to do full MPEG transcoding at faster than real time (no backlog, and they can process a 1-hour video in less than 1 hour if you're working with something delayed) in larger video formats. The real big high-def stuff still takes a bunch of horsepower, but we're talking $5,000 video cards instead of $150,000 encoding machines from the specialty broadcast video hardware vendors. I would expect that high-res video interferometry would be similar to transcoding lossy or lossless video compression (I could be wrong here, no doubt). For data transfer rates, we're seeing 10-gigabit interfaces on a lot of devices, and the media converters are coming down to reasonable prices. If you're pushing it over short fiber distances (not miles, but hundreds of feet), even the name-brand transceivers are sub-$1,000 now. This means more hardware is available for adaptation in a systems-function format, though not necessarily a ruggedized one. We're about to see 10-gig copper LOMs (LAN On Motherboard) in great profusion at work, though they've been out for a while. Edited November 27, 2012 by rmgill
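[Editor's note] The bandwidth side of the argument above is easy to sanity-check with a back-of-envelope calculation. The figures here (1080p, 24-bit colour, 30 fps, uncompressed) are illustrative assumptions, not taken from any particular sensor or vehicle system.

```python
# Back-of-envelope: how many uncompressed camera feeds fit on a 10 Gb link?

def feed_gbps(width, height, bytes_per_px, fps):
    """Raw (uncompressed) video bandwidth in gigabits per second."""
    return width * height * bytes_per_px * fps * 8 / 1e9

per_camera = feed_gbps(1920, 1080, 3, 30)   # 1080p, 24-bit colour, 30 fps
print(f"{per_camera:.2f} Gb/s per camera")               # ~1.49 Gb/s
print(f"{int(10 // per_camera)} such feeds per 10 GbE link")  # 6
```

So even without compression, a single 10-gigabit link comfortably carries several high-definition feeds, which supports the post's point that the transport side is no longer the bottleneck it was a decade earlier.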
Rickard N Posted November 27, 2012 I have a feeling that a head-mounted sight in a tank will have a slight disadvantage: when the gunner is firing at 9 o'clock, you wouldn't be able to search for other targets even in the 12 o'clock area, let alone further to the right. I don't know if this is a problem, but it might be. /R
Guest Jason L Posted November 27, 2012 I think it is fundamentally impossible to increase informational load without effectively increasing work load. Simply adding a trillion different overlay options is, in and of itself, functionally increasing the work load. Optimally setting the overlays becomes part of the job. And that is a problem no interface design can fix. Also, thermography has always been more accessible than interferometry. The latter is still a bitch to process. I can't think of any battlefield system, other than absurdly far-out weapons, that hinges on interferometry. Maybe there are some esoteric applications for InSAR?
mnm Posted November 27, 2012 How much does this bell-and-whistlery add to the price tag?
bojan Posted November 27, 2012 As for processing power, the MiG-29 (late-'70s tech) has enough for its HMS. Combined with the IRST, it can cover some of the "blind" angles as well.
Rick Posted December 2, 2012 (Author) And what about the use of the commander's MG, before the advent of uber sights?
ikalugin Posted December 2, 2012 Well, about the commander's MG: on both the T-64 and the T-90 it could be fired from below the armor.
JW Collins Posted December 2, 2012 Well, about the commander's MG: on both the T-64 and the T-90 it could be fired from below the armor. I imagine this was done in a similar manner to the setup on the M1 and M1A1? Did the T-72 and T-80 lack this feature?
bojan Posted December 2, 2012 T-80UD had it, other T-80 versions did not IIRC. T-90 has it.
Damian Posted December 2, 2012 The T-64, T-64R, T-72, T-72A (and their export variants), T-72B, T-72B1, T-80, T-80B, T-80BV, T-80BK and T-80BVK had a manually operated machine gun for the commander, or no machine gun for him at all. The T-64A, T-64B, T-64BV, T-64BM "Bulat", T-80A, T-80UD, T-84, T-90, T-90A and Object 187 have a very similar powered cupola with the capability to operate the commander's machine gun from under armor.
DT Posted December 17, 2012 Dumbing the post down a bit here. I noticed that a Leopard 1A5 kit I am building has a skate mounting allowing the crew to put the MG3 in either the TC's or the loader's position. My question is: in the German Army, did the TC ever mount this at his location, and was it typical on this version? I have tried Googling images of West German ones in use, but the ones I have seen depict the MG3 not mounted in either position.
Loopycrank Posted December 17, 2012 So, given that statistical studies have shown that the higher parts of the tank tend to get hit more, how often do machine guns and rangefinders and whatnot get shot off the roof?