Astrophotography III: Deep Sky Objects

Updated: April 28th, 2023.  Important: Before reading this article, make sure you read my two previous articles on astrophotography.

Astrophotography I: Star Trail Images

Astrophotography II: Milky Way Images

In this article we are going to discuss how to get started shooting images of deep sky objects like galaxies and nebulae. To start, I need to give a quick disclaimer. The genre of astrophotography takes years of practice to gain proficiency in, and many would argue it can never truly be mastered, as there are always new technologies, pieces of equipment, and techniques to learn. I highly recommend setting your bar low in your mind when you get started because it’s easy to get discouraged when your first image doesn’t look like the ones you see online.

To illustrate, the image on the left (below) is my first deep sky astro image (captured in 2012) and on the right is one that I created earlier this year. This is what eight years and countless hours of learning resulted in.

So how do we get started?

Well, put simply, deep sky astrophotography is almost identical to Milky Way photography (everything in the previous article applies to this one!) with the exception that we use a longer lens (in some cases a much longer lens). With the increased focal length, everything also gets a whole lot harder and thus the challenge begins.

Selecting a Lens

Let us start by discussing lenses and then we will turn our focus to the different challenges and how to overcome them.

Lens choice is simple. I recommend anything between 200mm and 400mm to start with. This will allow you to get started and shoot most of the larger objects in the sky. If you already own a long lens such as a 70-200mm, that is awesome; just use that. If not, I highly recommend the William Optics RedCat 51. It is a fantastic starter scope with a 250mm focal length and can even be used as a camera lens (albeit manual focus)!

What I do not recommend is using the telescope you bought for the family five years ago that has been sitting unused in the corner of the living room. Most “box store” scopes have very long focal lengths (and slow f/ratios) and are generally not optimized for photographic use. Some of them can be used for astrophotography, but only once you know a bit more and are comfortable with the basics.

Okay, now that we have discussed lenses, let us start to tackle what makes deep sky images so hard.


First is movement. Like we talked about in the previous article, because the sky is always moving overhead, we need to keep our shutter speed fast enough to ensure round stars. If we use too long of a shutter speed, we risk getting oblong or trailed stars.

The 400 rule tells us that the longest shutter speed we can safely use with the RedCat (250mm) is about 1.6 seconds (400 / 250 = 1.6). This is also an approximation and, in some cases, will still show some trailing. In this situation I would use 1 second, just to be safe.
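That rule is simple enough to write down as a tiny helper (the function name here is my own, not a standard):

```python
# A quick sketch of the "400 rule": divide 400 by the focal length
# (in mm, full-frame equivalent) to get the longest untracked shutter
# speed, in seconds, before stars start to trail noticeably.
def max_shutter_seconds(focal_length_mm):
    return 400 / focal_length_mm

# A 250mm scope allows roughly 1.6 seconds:
print(max_shutter_seconds(250))  # 1.6
```

Remember this is a rule of thumb, not a guarantee: pixel density and how close to the celestial equator you point both matter.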

At night, with an f/4.9 scope at 1 second, you do not get much light. To make up for it, we will need to increase our ISO, which reduces our dynamic range and lowers our signal-to-noise ratio (SNR), the measure of how clean and noise-free our data is. Not good.

So, how do we fix it?

Well, we move the telescope and the camera with the sky.

If we could magically have our scope track the night sky perfectly (no truly perfect mount exists), we would be able to use an infinitely long shutter speed, right? Well, kind of. At some point we will simply get too much light and end up with an over-exposed image, and of course, the night won’t last forever. But this would allow us to go for 1, 2, 4, 10, or even 30 minutes. Most of my images are made with shutter speeds of 20 minutes!

Let us set a reasonable target shutter speed with a tracker: 2 minutes. Most entry-level trackers can handle a 2-minute exposure without much issue. Two minutes compared to one second is 120 times more light, or roughly seven stops (since 2^7 = 128). That means that instead of using ISO 12800 at one second, we could use ISO 100 at 120 seconds and get essentially the same exposure. The advantage, though, is in the lower ISO: the difference in SNR and dynamic range between ISO 12800 and ISO 100 is huge. Also, instead of pushing the shutter button 120 times to take 120 different one-second exposures, you can push it once. Awesome.
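If you like to check the arithmetic, here is the stop math in Python, using the numbers from above (one stop is one doubling of light):

```python
import math

# How many stops brighter is a 120-second exposure than a 1-second one?
stops = math.log2(120 / 1)
print(round(stops, 2))  # 6.91, i.e. roughly seven stops

# Trading those ~7 stops of extra exposure time for lower ISO:
print(12800 / 2**7)  # 100.0 -> ISO 12800 drops all the way to ISO 100
```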

So, in my honest opinion, if you are truly wanting to get into astrophotography, get a tracker. They can be purchased for only around $250 and are truly amazing to have.

Can you do it without one?

Yes, but you are at the mercy of the 400 rule. With one, you are limited only by the tracker’s precision (usually you can expect 120 seconds on an entry level tracker) and how much light pollution you have in the area you are shooting.



How else can we increase our SNR?

Well, let us start by looking a bit more in-depth at noise. Put simply, noise is caused by your sensor doing an imperfect job of counting the amount of light that falls onto it. Let me give you a (vastly simplified) example. Let’s start by saying that we have a one-pixel monochrome sensor. Seriously, picture this in your head right now. Our sensor would be capable of capturing one pixel of information ranging from pure black to pure white (no color because it is monochrome).

Now let us use this sensor to photograph a perfect light source. This light source puts out exactly 10 units of light. If your sensor were perfect, you would expect the one pixel to record exactly 10 units of light every single time you take another image. The problem is, that is not how the world works. Sometimes it will record 10 units, other times it will record 9, or 12, or even in rare cases, 2. Now, these are all made-up numbers and they hold no significance, but the point is that noise stops the sensor from always recording the correct value.

Below is an image of what the captured values would look like probabilistically (they follow a Gaussian distribution). We can see that the “most likely” value to capture is 10, with numbers on either side getting less and less likely the further we get from 10. (In this case, it would be nearly impossible for your sensor to record 100).

Okay, enough math and science — back to photography! How does this affect us, and how do we fix it?

Well, let us imagine that we are going to use our magic simple sensor to take one photo of our light source and one photo only. We do that and our camera recorded a 9. If we never take another photo, we will always think the light source is 9 units bright. Is that correct? Nope. But if we only take one image, we are at the mercy of noise. Sometimes — most of the time — we will get the right value, but sometimes our camera will miss.

How can we be sure (or surer) that we have the right value recorded? We can take more photos.

Think about it. If we use our magic sensor to shoot 200 images of the light source and average the recorded values together, what will we get? Pretty darn close to 10 units, right? This is because the average of all our measurements is the correct value, but we need many measurements to ensure we aren’t getting results that are influenced by noise.
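If you want to see this for yourself, here is a tiny simulation of our imaginary one-pixel sensor. The noise level (2 units) is made up, just like the numbers above:

```python
import random

random.seed(0)
TRUE_VALUE = 10.0  # our "perfect" light source puts out 10 units


def read_pixel():
    # One reading from our imaginary sensor: the true value plus
    # Gaussian noise, just like the bell curve described above.
    return random.gauss(TRUE_VALUE, 2.0)


# A single reading can be noticeably off (it might be a 9, or a 12)...
single = read_pixel()

# ...but the average of 200 readings lands very close to 10.
average = sum(read_pixel() for _ in range(200)) / 200
print(round(average, 2))
```

Run it a few times with different seeds: the single reading bounces around, but the 200-shot average barely moves.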

You see, noise is random with a fixed center (the actual value). The more measurements we take, the closer we will get to the actual “noise-free” value.

Now, this is just one pixel. Let us multiply it by 20 million to represent most modern digital cameras. Imagine all 20 million pixels on your sensor recording different values because each of them is subject to noise. You can probably see why noise looks like a speckly pattern on your images. That speckle is caused by some pixels recording data too bright, and some too dark. All caused by noise.

But the solution works with 20 million pixels, too. Take 100 pictures of the same thing without moving your camera and average the recorded value of each pixel through the stack. If you make a new image where each pixel is the average of all 100 pixels that were recorded in that location, you will have a much cleaner image with a much-improved SNR.
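Here is a toy version of that pixel-by-pixel averaging, shrunk down to a 4x3 "sensor" with made-up scene values so it fits on screen:

```python
import random

random.seed(1)
WIDTH, HEIGHT, FRAMES = 4, 3, 100

# A made-up "true" scene: what a perfect sensor would record.
TRUE_SCENE = [[float((x + y) % 5) for x in range(WIDTH)]
              for y in range(HEIGHT)]


def capture_frame():
    # Every pixel records its true value plus its own random noise,
    # which is exactly why noise looks like speckle.
    return [[TRUE_SCENE[y][x] + random.gauss(0, 1.0) for x in range(WIDTH)]
            for y in range(HEIGHT)]


frames = [capture_frame() for _ in range(FRAMES)]

# "Stack": average each pixel position through all 100 frames.
stacked = [[sum(f[y][x] for f in frames) / FRAMES for x in range(WIDTH)]
           for y in range(HEIGHT)]
```

Compare any single frame against `stacked` and you will see the speckle mostly wash out; real stacking software does the same thing with 20 million pixels instead of 12.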


This is called “stacking” and it’s the secret sauce of astrophotography. The objects in the sky don’t move image-to-image (if you have a tracker) and so we are able to take hundreds of images of the same object, average them all together and vastly minimize the amount of noise in our images.

Pro (nerd) tip: The relationship between SNR and increased number of images is not linear. In fact, the increased SNR is proportional to the square root of the number of frames. This means that if we take 4 times more frames, we are only getting 2X the SNR. If we take 100 times the number of frames, our SNR only increases by 10X.
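The square-root relationship is a one-liner:

```python
import math

# Stacking N frames improves SNR by a factor of sqrt(N).
def snr_gain(num_frames):
    return math.sqrt(num_frames)

print(snr_gain(4))    # 2.0  -> 4x the frames, only 2x the SNR
print(snr_gain(100))  # 10.0 -> 100x the frames, only 10x the SNR
```

This is also why integration time has diminishing returns: each further doubling of SNR costs four times as many frames as you already have.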

Now, how does this stacking thing work? Well, on the capture end it is quite simple. Lock your camera on one target (you need a tracker) and shoot as many shorter sub-exposures (“subs”) as you have time for. If your mount can handle 60-second subs without getting trails, and you want 2 hours of total integration time, you will need to shoot 120 frames. You can even combine frames from a few nights if you are capturing the same area of the sky.
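Planning the session is just division; a quick sketch (the function name is mine):

```python
# How many sub-exposures do we need for a target total integration time?
def subs_needed(total_minutes, sub_seconds):
    return (total_minutes * 60) // sub_seconds

# 2 hours of integration at 60-second subs:
print(subs_needed(120, 60))  # 120
```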

When it comes to stacking on the computer, you will need software to handle the stacking process. While there are hack ways to do this in Adobe Photoshop, it is much better to use a dedicated astrophotography stacking application.

All of them work relatively similarly and plenty of tutorials are already online on how to do this. Here is a YouTube video on this process that I created last year. The magic math going on behind the scenes here is exactly what we discussed earlier in the article; it is simply an averaging of each pixel through the many images.

Another thing of note that these programs do is ensure that your images are perfectly aligned. Since no mount is perfect, there will be some drift between subs. This is why we are limited in our maximum sub length before we get trailed stars. These stacking programs align each star through the entire stack and ensure that the averaging is as accurate as possible.

Once the stacking is complete, you’ll be left with a final stacked image. Don’t expect much. Usually these look super boring and mostly black. They will need a lot of love in the editing process to pull out some data. You’ll want to use a program like Photoshop or PixInsight for editing. We will talk about that in a later article.

Fun fact: the image above is the unprocessed version of the thumbnail image for this article!

Okay. So where does that leave us?

As you can see, astrophotography is a pretty technical genre. In future articles we will discuss gear, calibration frames, editing, guiding, and a whole lot more! I think we’ve done enough for today.

Clear skies!


  • Forest Chaput de Saintonge

    Forest Chaput de Saintonge directs Rocky Mountain School of Photography with his wife, Sarah. He has been immersed in photography since he was born. He grew up in Missoula and began taking photos with an SLR when he was seven years old. He started working for Rocky Mountain School of Photography at age 13. During his free time, he likes to become a master at new things, build stuff, run, hike, bike, photograph, and be an amateur astronomer. Forest has a BA in Astrophysics, just because. He really enjoys teaching and loves to help students understand concepts thoroughly. Forest has vast experience working with and teaching Adobe Lightroom and Photoshop, and has worked many hours in the black and white darkroom.