Night Vision Sight Question.


OK, here's something I can't get my head around: add-on image intensifiers or thermal imagers that attach to a Picatinny rail in front of a day scope.

 

The picture generated is synthetic; it is not a direct optical path. It could be out of alignment with the day scope by degrees and you wouldn't notice. Yet some of these units are billed as "simply bolt it onto the rail in front of a zeroed scope and you're ready to shoot". I just can't see how that could ever work out. What am I missing here?

 

This is one such system: https://tnvc.com/shop/uns-anpvs-22-universal-night-sight/
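For a sense of scale (my own back-of-the-envelope sketch, not from the linked product page), here's how quickly a small angular misalignment between the add-on and the day scope moves the point of impact:

```python
import math

def impact_shift_cm(misalignment_deg: float, range_m: float) -> float:
    """Lateral shift of the point of impact, in cm, caused by an angular
    misalignment between the add-on unit and the day scope.
    shift = range * tan(angle)."""
    return range_m * math.tan(math.radians(misalignment_deg)) * 100.0

# Even a tenth of a degree is far outside an acceptable zero at 100 m:
for deg in (0.1, 0.5, 1.0):
    print(f"{deg:.1f} deg -> {impact_shift_cm(deg, 100.0):.1f} cm at 100 m")
```

At 100 m, 0.1 degrees is already roughly 17 cm of shift, which is why "just clamp it on" only makes sense if the device adds essentially no angular error to the optical path.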


The tubes in an II amplify light along a direct optical path: they take light in through an objective lens and pass it through an II tube, which amplifies it; the output strikes a phosphor-coated screen at the back of the tube, which produces the picture. Placing an image intensifier in front of a sight works because an image intensifier is a direct optical device. The picture is not synthetic like that of a thermal imager, where the image is collected and processed, then redisplayed on a screen.

Edited by Wobbly Head

There was some kind of combined II / IR sight where the II gave a real-time view, because it was an analogue photomultiplier tube, but the IR 'lagged'. Quickly turning your head made the green screen move in real time, but the IR images stayed momentarily at the previous frame until the digital rendering caught up. The advantage was you could see the heat of engines, or even people's hand prints on a wall from a couple of minutes ago, overlaid on the regular light-amp view. It looked pretty useful for close-quarters COIN. Processor speeds will eventually get to the point where the whole thing is digital and presented to the user in real time.


I guess the best way of explaining what I mean is that the TV in my living room is the equivalent of the output screen of a TI add-on unit. If I'm watching live coverage of the cricket and shoot at the screen, even if I perfectly align the rifle with it, it's far more likely that the bullet will end up disintegrated and embedded in a breeze-block wall in Orkney than in an England batsman at the Oval.


If the output screen is put exactly in line with the picture it would work. For example, if someone were to place a TV in a window, with a camera exactly behind the TV showing exactly what is behind it, it would look as if only the shape of the TV frame were blocking the view from the window. As for shooting poor English cricketers, I don't think the US Army has enough ammo in its inventory to accomplish that task.


This is the crux of it. Other than in the case of a direct optical path, the positioning of the sight would have to be achieved with incredible accuracy for you to be able to simply clamp it onto the rail and shoot. Presumably some kind of adjustment mechanism would need to be built in.

 

Late edit: Though the adjustment would need to be internal if the thermal unit attached directly to the scope, like this one:

 

http://www.scottcountry.co.uk/products-Pulsar-Core-FXD50-Front-Mounted-Thermal-Add-On-6022.htm


It is a direct optical path. The input photocathode and output phosphor screen are converting photons to electrons and back to photons along the same path of travel. It would be the same as sticking another lens set in front of the scope.

 

It's sort of like a vacuum tube that you're looking down, with a screen in the middle that takes in electrons and makes more emit on the far side, along the same path as the originating electron.

Edited by rmgill

I think the direct optical path that Chris means is something like the device described here:

 

https://docs.google.com/viewer?url=patentimages.storage.googleapis.com/pdfs/US6204961.pdf

 

i.e. a system that superimposes the thermal / intensified image onto the view in the same manner as a reticle would be inserted.

 

Otherwise, you will be aiming at a picture on a screen, and that screen could have all possible errors of alignment, parallax, etc., different from those of the sight.


Sort of. The 'screen' is really no more than a lens that emits light instead of the electrons that strike its back side. Alignment is critical, which is probably why they cost so much.


This describes the various differences between Gen 0 and Gen 3 tubes.

http://www.photonics.com/EDU/Handbook.aspx?AID=25144


Thinking about it, there could be a technical trick that would make an image intensifier immune to alignment error. Using a well-designed optical system at the back end, just before the day scope, one could project an optical image of the image that appears in the plane of the photocathode, i.e. the component that receives the ambient light to be amplified.

 

Googling the relevant words a bit, I found this patent:

 

In an alternate arrangement, the lens can be coupled to the output thread coupling of the housing using an adapter having one end adapted to be coupled to the housing, and another end adapted to be coupled to an image capturing device, such as a CCD video camera, CCD camera, film camera, digital camera, or other image capturing device. In this embodiment, the lens is selectively positionable along a portion of the length of the adapter, thereby varying the conjugate ratio so that a virtual image of selectably variable magnification is formed on the focal plane of the image capturing device.

In this manner, the present invention permits viewing the input source, optically intensified through the intensifier, as a virtual image through the lens using the human eye, or as a real image through the lens viewable through an image capturing device, depending on whether the lens is attached directly to the image intensifier or is attached to the image intensifier through the adapter.

 

This other patent seems related.
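To illustrate the conjugate-ratio idea from the patent excerpt, here is a minimal thin-lens sketch (the function names and the numbers are mine, purely illustrative, not from the patent): sliding the lens along the adapter changes the object distance, and with it the conjugate ratio and the magnification of the relayed image.

```python
def thin_lens_image_distance(f_mm: float, do_mm: float) -> float:
    """Solve the thin-lens equation 1/f = 1/do + 1/di for di."""
    return 1.0 / (1.0 / f_mm - 1.0 / do_mm)

def magnification(f_mm: float, do_mm: float) -> float:
    """Lateral magnification m = -di/do.  Changing the object distance
    do changes the conjugate ratio di/do, hence the image size."""
    return -thin_lens_image_distance(f_mm, do_mm) / do_mm

# A 50 mm lens: object at 75 mm relays a magnified (inverted) image,
# object at 150 mm relays a reduced one.
print(magnification(50.0, 75.0))   # approx. -2 (inverted, magnified)
print(magnification(50.0, 150.0))  # approx. -0.5 (inverted, reduced)
```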

Edited by sunday

This is why you have resolution issues at high magnification. Putting variable-magnification optics onto an II scope will run into the same problem. As long as your reticle is on your day scope and zeroed, and as long as the FOV of the image intensifier is wide enough, it doesn't matter. This is why you have return-to-zero mounts for both.


Do they really have a pixel-based display, though? The early II sights were just a phosphor screen with no mask and so the resolution was essentially the spot-beam size of the cascaded electrons. I have no idea how they work now, though.

 

Also, you will likely need an optical element at the rear of the II/IR sight to adjust the perceived focal distance so that the magnifying sight behind has an image to focus on. Trying to focus on a screen 2 inches or less in front of the objective lens requires a microscope, not a telescope.
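A rough thin-lens sketch of why that rear optical element is needed (my own illustration, with made-up focal lengths): a screen placed close in front of a bare objective yields no real image for the scope to focus on, whereas an eyepiece with the screen at its focal plane collimates the output, so the day scope can simply focus at infinity.

```python
def image_distance_mm(f_mm: float, do_mm: float) -> float:
    """Thin-lens equation 1/f = 1/do + 1/di, solved for the image
    distance di.  An object exactly at the focal plane (do == f)
    produces a collimated beam, i.e. an image at infinity."""
    denom = 1.0 / f_mm - 1.0 / do_mm
    if abs(denom) < 1e-12:
        return float("inf")
    return 1.0 / denom

# A screen ~50 mm in front of a bare 100 mm objective: di comes out
# negative, i.e. only a virtual image the scope cannot focus on.
print(image_distance_mm(100.0, 50.0))  # negative -> virtual image
# A 25 mm eyepiece with the phosphor screen at its focal plane sends
# the light out collimated, which is what the day scope wants to see.
print(image_distance_mm(25.0, 25.0))   # inf -> collimated output
```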


Quote

Do they really have a pixel-based display, though? The early II sights were just a phosphor screen with no mask and so the resolution was essentially the spot-beam size of the cascaded electrons. I have no idea how they work now, though.

Also, you will likely need an optical element at the rear of the II/IR sight to adjust the perceived focal distance so that the magnifying sight behind has an image to focus on. Trying to focus on a screen 2 inches or less in front of the objective lens requires a microscope, not a telescope.

 

Yes, they use a bundle of very thin optical fibres to erect the image. It needs less space than an erecting lens. One should not forget the microchannel plate, either.

 

Apart from that, the optical element at the rear, let's say the ocular, plays the same role as the projecting lens in an aircraft HUD.

 

However, perhaps the ocular used to focus the image to infinity could also be used to erect the image.


 

We already established that earlier in the thread. We're now talking about thermal imagers, which (AFAIK) are not direct optical paths. Sunday did a good job of explaining it.

  • 5 years later...

Upgrades to the US Army’s night vision technology make darkness into a video game

Quote

The US Army’s Lancer Brigade showed footage of the world as seen through the latest in night vision technology, and it’s a major improvement on the familiar blurry green visuals. By swapping the standard green tubes for white (along with other tweaks), the new Enhanced Night Vision Goggle-Binoculars (ENVG-B) clearly show people and objects outlined in a glowing light, almost like a video game objective.

Is it AI-driven? If so, funny things could follow.

Quote
New Equipment! Enhanced Night Vision Goggle-Binoculars continue the @USArmy ’s effort to #modernize our fighting force! You have never seen night vision like this! #readynow #QuietProfessionals

 

On 5/7/2021 at 3:08 PM, DB said:

It could be as simple as bi-directional edge detection, which requires only a simple convolution algorithm.

Yep. I worked for a doctor in '94 who developed an edge-detection algorithm that ran in a script we could dump into 4D or other apps to edge-detect coronary arteries on 90s-era Macs. The purpose was getting hard numbers on blockages at arteriosclerosis sites: the edge-detection script measured a catheter of known cross-section as a reference dimension and then drew two lines across the artery, one across the un-occluded section, the other across the occluded section. I'm sure whatever he had done for simple edge detection could be replicated similarly and then run as a function on a simple ASIC in some NVGs.

I'm pretty sure that the above edge detection is also not unlike some functions I've seen in both Photoshop and in an import function in Freehand.

Edited by rmgill

Yes, you typically use a 3×3 convolution for simple edge detection, but you can fatten the line up by various means. As far as processing power goes, this is almost free nowadays, especially as they're likely to be doing brightness and contrast manipulation anyway.
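As a concrete sketch of that kind of bi-directional edge detection (assuming Sobel-style kernels, which the posts don't specify): two 3×3 convolutions, one per direction, combined into a single edge magnitude.

```python
# Horizontal and vertical 3x3 Sobel kernels (correlation form, as is
# usual in image processing; sign has no effect on the magnitude).
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3x3(img, kernel):
    """Valid-mode 3x3 convolution over a 2-D list of grey levels."""
    h, w = len(img), len(img[0])
    out = [[0] * (w - 2) for _ in range(h - 2)]
    for y in range(h - 2):
        for x in range(w - 2):
            out[y][x] = sum(kernel[j][i] * img[y + j][x + i]
                            for j in range(3) for i in range(3))
    return out

def edge_magnitude(img):
    """Combine the two directional responses into one edge map."""
    gx = convolve3x3(img, SOBEL_X)
    gy = convolve3x3(img, SOBEL_Y)
    return [[(a * a + b * b) ** 0.5 for a, b in zip(rx, ry)]
            for rx, ry in zip(gx, gy)]

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255],
       [0, 0, 255, 255]]
print(edge_magnitude(img))  # strong response everywhere along the edge
```

Fattening the detected line, as mentioned above, would then just be a dilation or a wider kernel over this magnitude map.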

On 5/7/2021 at 9:08 PM, DB said:

It could be as simple as bi-directional edge detection, which requires only a simple convolution algorithm.

Simple and convolution in the same sentence!!!!

(No, transforms was NOT my favourite subject)

/R

