
    With active forums, two dark sites and a knowledgeable membership, East Midlands Stargazers has something for everyone.

Lights, Darks, Flats, Bias as I understand it.


Perkil8r


Following on from a lengthy shoutbox session last night I thought it appropriate to add this topic.

Whilst I am certainly no expert in any way, shape or form, and have very little experience of imaging so far, I have come to understand the principles involved. This thread will hopefully cover what each of the 4 types of data is, how it is gathered, and what it is used for. I can't cover techniques as I have little or no experience so far, but hopefully some of those more qualified to advise or comment will help out here and fill in the relevant information.

Lights - These are the very minimum of data we need to create an image. They are the images that we take of the object, be that a planet, galaxy etc. With nothing else added we will get a picture to look at. All of the other types of data serve only to enhance the light frames we take by removing various kinds of noise from the resulting image. In effect we are capturing the light that we want to see, or at least that is one way to remember what they are called, I guess.

To get a better image, we stack a number of light frames to combine them into a single stacked light frame. Each light frame will capture slightly different data, even at exactly the same exposure length and ISO setting. If we split the image into numbers from 1 to 100 for a minute: in the first frame we take we might get 50 of those numbers, meaning we are missing some of the picture. The next time we take the same picture we might get 30 of the same numbers but 20 new ones, so now we have collected a total of 70 individual numbers. To get the full picture we need to collect as many of those numbers as we can, so each time we snatch some new numbers we get a little bit more of the full set. This is why we can't cheat by taking one frame, copying it and stacking the two together; all we do is get the same numbers repeated, we don't get any new ones. The numbers are actually photons, which make up the picture we see. Not all photons arrive at the same time, and some might reach us more frequently than others, but we won't collect everything required to make up the full picture in a single frame. Hence we need many frames to stack into a single image so we can get as much detail as possible.
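As a toy illustration of why stacking helps (my own sketch with made-up numbers, not from any particular stacking software): averaging many noisy frames of the same scene beats the random noise down while the real signal stays put.

```python
import numpy as np

rng = np.random.default_rng(42)
true_image = np.full((100, 100), 50.0)   # the "full set of numbers" we want

# Each light frame = the real signal plus random noise, so no single
# frame shows the whole picture cleanly
frames = [true_image + rng.normal(0, 10.0, true_image.shape) for _ in range(25)]

single_error = np.abs(frames[0] - true_image).mean()
stacked = np.mean(frames, axis=0)        # stacking = averaging the frames
stacked_error = np.abs(stacked - true_image).mean()
# Stacking 25 frames cuts the random noise by roughly sqrt(25) = 5x
```

Note that stacking a frame with a copy of itself averages identical noise, which is exactly why the "cheat" described above gains nothing.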

Darks - These are what the camera will produce without any light hitting the sensor. It is in effect a picture of what the camera makes us see in the dark. It is noise that the sensor produces; that noise is referred to as hot pixels. The sensor is made up of lots of individual parts, or pixels, just like your monitor. Each point of light lights up a pixel, which is then turned into a signal and compiled into a picture later. Some of these pixels produce false readings. By taking a picture with no light we get a snapshot of what the sensor is effectively making up on its own.

The darks need to be done at the same temperature as the "Light" data, and with the same camera settings, but with no light hitting the sensor. For this we would generally put the lens cap over the telescope and take a number of dark frames. Some people say you should take as many dark frames as you do light. That is for discussion by those in the know, not by me.

Stacking darks with lights takes away the sensor noise or "hot pixels". These hot pixels might look like pinpoint, oddly coloured stars in the original light frames you have taken. So adding dark frames to the light frames in the stack removes the background noise from the sensor. Again, more is better than a couple, as it gives the imaging software more opportunities to find all the noise than if we add just one.
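A minimal sketch of what dark subtraction does (hypothetical pixel values, not real camera data): the hot pixels fire whether or not light comes in, so they appear identically in the light and the dark, and subtracting the dark removes them.

```python
import numpy as np

light = np.full((4, 4), 100.0)           # what the sky contributes to each pixel
dark = np.zeros((4, 4))

# Two hypothetical hot pixels: they contaminate the light frame AND
# show up in the dark frame taken with the cap on
for pos, value in {(1, 2): 500.0, (3, 0): 800.0}.items():
    light[pos] += value
    dark[pos] = value

calibrated = light - dark                # dark subtraction removes the hot pixels
```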

Flats - These are what the camera picks up from the optical train, such as dust and other optical imperfections. How to take these is a huge subject, but the basics are that we need to recreate the exact circumstances that we used to capture the light data, so that any imperfections can be removed from our stacked image. They are images that are flat: there is nothing interesting for the camera to pick up, just a blank flat image, a bit like staring at a blank sheet of paper.

Generally we use a light coloured (normally white), evenly illuminated area. It is important that nothing at all alters from when you take the light frames. Focus, rotation of the camera in relation to the focuser tube etc. are critical, as we are trying to capture exactly what the optics are producing that might affect our overall image. Such imperfections will be dust on a lens, mirror or sensor, and also vignetting (where the middle of an image looks lighter than the outside). All of this will be picked up by taking flats, as long as nothing moves between taking the light frames and taking the flats. We need to know where the dust etc. is on the light images we took earlier, so if we move anything they won't be lined up, hence the need for making sure nothing moves.

One method is to place a white T-shirt over the end of the telescope, stretched tight so as not to cause ripples etc., then face towards a bright area such as the daytime sky or brightly reflected light off a wall. The surface of the shirt needs to be very evenly illuminated so as not to cause artificial gradients or vignetting. Again, how to do these and the different methods are best advised by those that know. I've only ever taken two so far. And again, we need a number of these to be useful.

One thing I did learn last night is that we need the Flat image to be correctly exposed, neither under nor over. Exposures will be shorter than with your light frames by quite a way in most cases. Mine were way over exposed which probably did more harm than good.
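A rough sketch of what the flat actually gets used for, with invented numbers: vignetting and a dust mote scale the light reaching each pixel, the flat of a blank target records that same pattern, and dividing by the normalised flat undoes it.

```python
import numpy as np

ideal = np.full((5, 5), 200.0)               # an evenly lit patch of sky
response = np.ones((5, 5))
response[2, 2] = 0.5                         # a dust mote blocking half the light
response *= np.linspace(1.0, 0.8, 5)         # vignetting: one edge darker

light = ideal * response                     # what the camera actually records
flat = 1000.0 * response                     # flat of a blank target: same pattern

master_flat = flat / flat.mean()             # normalise the flat around 1.0
corrected = light / master_flat              # division cancels dust + vignetting
```

The raw `light` is uneven, but `corrected` comes out uniform again because the dust and vignetting pattern appears in both the light and the flat.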

Example I found on the internet using a grey T shirt, note the vignetting, and dust bunnies/lens marks:

Bias - These serve to remove any other imperfections that the camera might give us. Bias frames are apparently not advised for some stacking software as they confuse it, but this is what they are: each time we take a picture, the sensor sees an image, and that image is converted into a signal and stored as data. In that processing we sometimes get rogue bits of stray data. Bias frames capture that bad data and the stacking process removes it. Bias data is basically "made up" by the processor in the camera; it never existed in the image, nor on the sensor, so we are capturing the "thoughts" of the processor. To do this we take the shortest possible exposure with no light entering the camera. There is next to no chance of any additional hot pixels being generated, and no optical train to leave any images, so it captures any stray data the camera might be producing.
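One common way to use bias frames (a generic sketch with invented numbers, not tied to any particular camera or software) is to average many of them into a master bias, so the camera's fixed offset can be subtracted from other frames:

```python
import numpy as np

rng = np.random.default_rng(0)
true_offset = 100.0                          # the camera's built-in signal floor

# A bias frame: shortest possible exposure, cap on - nothing but the
# fixed offset plus a little read noise
bias_frames = [true_offset + rng.normal(0, 2.0, (8, 8)) for _ in range(64)]

master_bias = np.mean(bias_frames, axis=0)   # averaging beats the noise down
# master_bias now sits very close to the true offset at every pixel
```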

That's it, that's as much as I have picked up so far. Hopefully it is correct and will be useful to someone one day. If you know better, please do post up. Anything I find out to be incorrect I will edit out so as not to misinform. Anything anybody would like to add, please please do. It would be good to get all of this information in one place, along with methods for getting each bit of data. I've done it to help explain a few points, and also so that I went through the process step by step for myself, so hopefully I have remembered a bit more in the process of writing it :blush:


Mike, being part of that discussion last night, I must give you a pat on the back for this excellent write up.

I too have totally screwed up gathering the flats big time.

Now I even understand what Bias frames are for.

Look out NASA, Hubble has some competition on the horizon. :lol: :lol: :lol:


I might be wrong about bias frames, but that's what I understand of them. I hope more qualified folk than me might add to this so we can gather as much info in one place as we can to hopefully benefit folks like you and I.


That's awesome. Thanks Mike! I have been wanting to ask about this exact topic for ages but didn't know how to word it succinctly enough. Good work :)


Excellent topic Mike - well written, easy to follow, and nicely explained. Good one mate :)


I don't image, but your post has quickly explained a lot of stuff that I never really understood or delved into before. Glad you posted that, very interesting and useful info.

Really though, I am NOT being drawn into the dark side :P .


I don't image, but your post has quickly explained a lot of stuff that I never really understood or delved into before. Glad you posted that, very interesting and useful info.

Really though, I am NOT being drawn into the dark side :P .

Even after Sunday? ;)


Well, anything can happen :)

Darks, flats, lights, bias - millions of them.

I get dark adapted, use averted vision, take expensive bilberry supplements, and measure my pupils.

Just normal everyday stuff :D:P:D .


Guest AstronomyShed

I use Nebulosity to do my pre-processing; I think it gives much better, crisper and more vibrant images. I agree, the bias thing can be confusing, and the explanation above is a good analogy, but consider this. It's what I do and why, and Craig Stark agrees with me.

Bias, because it is in effect an electrical noise, is in your flats, darks and subs.

So, calibration frames (darks and flats) are subtracted from your subs when you process. The thing is, your dark will subtract bias and your flat will subtract bias, so you have subtracted bias twice, meaning you actually create, in effect, a 'reverse bias' effect. (Hard to get your head round, innit lol)

So, I take my bias frames, and in Nebulosity I stack them in average mode to create a master bias.

I then load my flats in, but I load them into the subs section so Nebulosity thinks they are subs, then I load the master bias and stack the flats, subtracting the bias and creating, in effect, a bias-subtracted master flat.

Now, because the bias is removed from the flats, when I process using my subs, flats and darks, I'm only subtracting bias once.
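A numeric sketch of the point being made here (invented numbers, and a 1-D three-pixel "image" for brevity): if the bias offset is left inside the flat, the flat's shape is slightly wrong and the correction leaves residual unevenness; subtracting the bias from the flat first avoids that.

```python
import numpy as np

bias = 100.0
response = np.array([1.0, 0.9, 0.8])         # vignetting across a 3-pixel "image"

flat_raw = 4000.0 * response + bias          # the flat contains the bias offset
light = 300.0 * response + bias              # so does the sub (dark current ignored)

# Bias-subtracted master flat, as described above
master_flat = flat_raw - bias
master_flat /= master_flat.mean()
good = (light - bias) / master_flat          # perfectly even result

# Leaving the bias in the flat skews its shape
wrong_flat = flat_raw / flat_raw.mean()
bad = (light - bias) / wrong_flat            # residual unevenness remains
```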


Bloody hell, just when I thought I was beginning to understand just a fraction of this bias thing as well. :wacko: :wacko: :wacko:


I get it, I think....

So use the flats as if they are lights and remove the bias from them, giving the flats a zero bias rather than a plus. Then use the resultant frame as your flat. That way, when removing flats and darks from the lights, the flats carry zero bias while the darks and lights each carry +1 (for the sake of argument). So (light +1) − ((dark +1) + (flat +0)) = light with zero bias, i.e. 1 − 1 − 0 = 0. Whereas if we do it all in one go, with bias still in both the darks and the flats, we get 1 − 1 − 1 = −1, and we've subtracted one bias too many.

Jeepers.... maths as well as astrophotography, I need to lay down while I think of a way to word this better and add it into the original post :wacko:


Guest AstronomyShed

I did a video of how I do it in Nebulosity. Granted, it's a much more drawn-out procedure than just throwing them all into DSS, but after trying it a couple of times I was totally convinced that my images were coming out better, with more vibrant colours and better detail. If you have Nebulosity and have never tried pre-processing with it, give it a go, you may be converted as I was :-

http://www.youtube.com/watch?v=OMhLYg-xPuc


Well Dion many thanks for the vid.

That just explained everything I needed to know.

Right, I'm off to download Neb


Guest Steve

Hi Dion. Great video.

With regard to subtracting the bias from the flats, why do it to each of the flats before you stack them? Wouldn't it be the same to create the master flat first and then subtract the bias?

(I've paused watching at this point as I didn't want to forget the question - apologies if this is answered later!)

(Got to the end - no it wasn't!)

Wow - so the Dark Arts are going to take some time to master!!!


Nice write up and following discussion.

Bias frames - If we think of the pixels in the sensor as buckets, each frame we capture starts at 0, and as each bucket fills (with photons) we get a final value depending on how full it is. After the data is saved, the camera empties each bucket back to 0, but as we know nothing is perfect, so each bucket's value will be almost 0 rather than bang on 0. Bias frames capture this reset value of each bucket so it can be corrected in processing.

I believe DSS takes care of when to subtract the Bias master in processing.

:unsure::wacko: :wacko: :wacko:


I've been thinking about this subtracting of bias from flats business. In Registax you can manually align or choose no align. Would it be at all feasible or doable to subtract the bias from flats in Registax?

