This project will show you how to create a fully automated ground station that will receive and decode NOAA weather satellite images and upload them to your own website served from an Amazon AWS S3 bucket. With this project you don’t need your own server or have to run your own website infrastructure. Have a look at my AWS site that is updated automatically all day long.
Oh, you want a site like this, too? Full of images you decoded from space? Then let’s get started, my friend.
Here’s what you’ll need:
- a modern Raspberry Pi (version 3 or 4), probably with Wi-Fi since it may be deployed outdoors. I used a RPi 3 model B. I have heard that a RPi Zero may not be powerful enough.
- an RTL-SDR dongle. I recommend the RTL-SDR V3 dongle from the excellent RTL-SDR.COM blog.
- an AWS account for hosting images and web content in an Amazon S3 bucket. You can sign up for the free tier for a year, and it’s still cheap after that.
- a simple dipole antenna with elements 21 inches (53.4 cm) long and that can be adjusted to have a 120 degree angle between the elements. Here’s a great article on design or you can just buy this dipole kit, also from the RTL-SDR.COM blog.
- coaxial cable to go from your antenna to Raspberry Pi + RTL-SDR dongle. The dipole antenna kit comes with 3m of RG174 coax, but I used 10 feet of RG58 coax.
This is a very long article with lots of steps, so take your time — I won’t be able to help everyone debug all their issues. I won’t go into the details of using a Raspberry Pi for the first time — this project assumes you know your way around the Pi and are comfortable with installing software on it. If you have never used AWS before, I suggest you set up an account and get familiar with what S3 is.
Weather Satellites and RTL-SDR

This probably isn’t the first you’ve read about using a software defined radio (SDR) to receive weather satellite images. This type of project has been documented before. Sometimes the emphasis is on software defined radio hardware and techniques, sometimes it’s about antenna design, or maybe the article is written by a real weather enthusiast who always uses the abbreviation “wx” for weather. I’m not an expert in any of these areas, but the idea of receiving images directly from weather satellites as they fly overhead has intrigued me for many years. This has all gotten a lot easier with RTL-SDR dongles, more powerful Raspberry Pi computers, and simpler antenna designs that get the job done. I gave this project a try recently using this well-written Instructables article, a totally hacked-together antenna I made, and a very old RTL-SDR dongle:
and on my very first attempt, I decoded this image from NOAA19 as it passed over my area:
From that moment, I was hooked. I played around with different antennas and such, but found it tedious to always copy the images from my outdoor Raspberry Pi to my computer so I could look at them. I resolved to automate the uploading of images to an S3 bucket and to improve upon the scripts from the Instructables article. This is the overall solution:
Your ground station website functionality will be completely in client-side JavaScript, using the AWS JavaScript SDK to make API calls to S3. The scripts that run on the Raspberry Pi also use Node.js and the AWS SDK to upload images to S3. There are a lot of steps to get everything set up:
AWS SDK Credentials
The Node.js scripts that run on the Raspberry Pi use the AWS JavaScript SDK to upload to S3, so you need AWS credentials. These two articles show you how to get your credentials and store them for Node.js access: Getting your credentials, and Loading Credentials in Node.js from the Shared Credentials File.
Your credentials file on the Raspberry Pi, ~/.aws/credentials, will look like this:
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
Also set the default region where your S3 bucket will reside in ~/.aws/config. For example:
[default]
output = json
region = us-west-2
Create an S3 Bucket
Now create an S3 bucket for public website hosting. I’m using the bucket name nootropicdesign.wx for mine. The instructions are in this article: Setting up a Static Website
At this point you should be able to load a simple website from your new bucket. You might want to upload a simple index.html file like the one below and try to load it in your browser at http://BUCKETNAME.s3-website-REGION.amazonaws.com/.
<!doctype html>
<html>
<head><title>S3 test</title></head>
<body>Hello from S3</body>
</html>
Create an Identity Pool in Cognito
To give public users the ability to access your S3 bucket using the AWS SDK, you need to set up an identity pool and create a policy allowing them read access to your bucket. This is done using Amazon Cognito. A good guide for granting public access to your bucket is described in this article that shows how to serve images from an S3 bucket (just like we are). It’s somewhat confusing to follow the steps, so take your time.
Step 1: create an Amazon Cognito identity pool called “wx image users” and enable access to unauthenticated identities. Be sure to select the region in the upper right of the page that matches the region where your S3 bucket was created! Make note of the role name for unauthenticated users, e.g. “Cognito_wximageusersUnauth_Role”.
Step 2: on the Sample Code page, select JavaScript from the Platform list. Save this code somewhere, because we need to add it to the web content later. It looks something like this:
// Initialize the Amazon Cognito credentials provider
AWS.config.region = 'us-west-2'; // Region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
IdentityPoolId: 'us-west-2:1d02ae39-3a06-497e-b63c-799a070dd09d',
});
Step 3: Add a Policy to the Created IAM Role. In the IAM console, choose Policies. Click Create Policy, then click the JSON tab and add this, substituting BUCKET_NAME with your bucket name.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::BUCKET_NAME"
]
}
]
}
Click Review policy and give your policy a name, like wxImagePolicy.
In the IAM console, click Roles, then choose the unauthenticated user role previously created when the identity pool was created (e.g. Cognito_wximageusersUnauth_Role). Click Attach Policies. From the Filter policies menu, select Customer managed. This will show the policy you created above. Select it and click Attach policy.
Step 4: Set the CORS configuration on the S3 bucket. In the S3 console for your bucket, select Permissions, then CORS configuration, and add this configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Raspberry Pi Setup

Most of these instructions are from steps 2 and 3 of the Instructables article I mentioned earlier.
Install Required Packages
First, make sure your Raspberry Pi is up to date:
sudo apt-get update
sudo apt-get upgrade
sudo reboot
Then install a set of required packages.
sudo apt-get install libusb-1.0
sudo apt-get install cmake
sudo apt-get install sox
sudo apt-get install at
sudo apt-get install predict
I used Node.js in some of the scripting, so if you don’t have node and npm installed, you’ll need to do that. In-depth details are here, and I easily installed with:
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs
Using your favorite editor as root (e.g. sudo vi), create a file /etc/modprobe.d/no-rtl.conf and add these contents:
blacklist dvb_usb_rtl28xxu
blacklist rtl2832
blacklist rtl2830
Build rtl-sdr
Even if you have rtl-sdr already built and installed, it’s important to use the version in the GitHub repo keenerd/rtl-sdr, as this version’s rtl_fm command can create the WAV file header needed to decode the data with sox.
cd ~
git clone https://github.com/keenerd/rtl-sdr.git
cd rtl-sdr/
mkdir build
cd build
cmake ../ -DINSTALL_UDEV_RULES=ON
make
sudo make install
sudo ldconfig
cd ~
sudo cp ./rtl-sdr/rtl-sdr.rules /etc/udev/rules.d/
sudo reboot
Install and Configure wxtoimg
The program wxtoimg is what does the heavy lifting in this project. It decodes the audio files received by the RTL-SDR receiver and converts the data to images. The original author of wxtoimg has abandoned the project, but it is mirrored at wxtoimgrestored.xyz.
wget https://wxtoimgrestored.xyz/beta/wxtoimg-armhf-2.11.2-beta.deb
sudo dpkg -i wxtoimg-armhf-2.11.2-beta.deb
Now run wxtoimg once to accept the license agreement.
wxtoimg
Create a file ~/.wxtoimgrc with the location of your base station. As usual, negative latitude means the southern hemisphere, and negative longitude means the western hemisphere. Here’s my location in Minnesota, USA.
Latitude: 45.0468
Longitude: -93.4747
Altitude: 315
The program predict is used by the automated scripts to predict weather satellite orbits. Run predict to bring up the main menu:
--== PREDICT v2.2.3 ==--
Released by John A. Magliacane, KD2BD
May 2006
--==[ Main Menu ]==--
[P]: Predict Satellite Passes [I]: Program Information
[V]: Predict Visible Passes [G]: Edit Ground Station Information
[S]: Solar Illumination Predictions [D]: Display Satellite Orbital Data
[L]: Lunar Predictions [U]: Update Sat Elements From File
[O]: Solar Predictions [E]: Manually Edit Orbital Elements
[T]: Single Satellite Tracking Mode [B]: Edit Transponder Database
[M]: Multi-Satellite Tracking Mode [Q]: Exit PREDICT
Select option ‘G’ from the menu to set your ground station location:
* Ground Station Location Editing Utility *
Station Callsign : KD0WUV
Station Latitude : 45.0468 [DegN]
Station Longitude : 93.4747 [DegW]
Station Altitude : 315 [m]
Enter the callsign or identifier of your ground station
You can enter whatever you want for the callsign (I used my amateur radio callsign). When entering the longitude, note that positive numbers are for the western hemisphere and negative numbers are for the eastern hemisphere. This is the opposite of the usual convention, so make sure you get it right or you’ll be listening when there’s no satellite overhead!
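Since this sign flip is easy to get wrong, here is a tiny illustrative helper (not part of the project scripts; the function name is mine) that converts a conventional east-positive longitude, like the one used in ~/.wxtoimgrc, into the west-positive value predict expects:

```javascript
// Convert a conventional longitude (east-positive, as used in
// ~/.wxtoimgrc) to the value predict expects (west-positive).
// Illustrative only -- this helper is not part of the project scripts.
function toPredictLongitude(conventionalLon) {
  // Flip the sign: -93.4747 (93.4747 degrees West) becomes 93.4747
  return -conventionalLon;
}

console.log(toPredictLongitude(-93.4747)); // the value to type into predict
```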
Get the Automation Scripts and Configure
I’ve completely refactored the scripts originally posted in the Instructables article and added Node.js scripts for creating thumbnail images and uploading all images to S3. The git repo can be cloned anywhere on your Raspberry Pi. The configure.sh script sets the installation directory in the scripts and schedules a cron job to run the satellite pass scheduler job at midnight every night.
git clone https://github.com/nootropicdesign/wx-ground-station.git
cd wx-ground-station
sh configure.sh
cd aws-s3
npm install
In the file aws-s3/upload-wx-images.js, set REGION, BUCKET, and LOCATION to the correct values. This Node.js script prepares the images for upload by creating thumbnail images, printing some metadata on the images, and creating a JSON metadata file for each image capture. The LOCATION string will be printed on the images that you capture. Here are my values just for reference.
var REGION = "us-west-2";
var BUCKET = "nootropicdesign.wx";
var LOCATION = "nootropic design ground station, Plymouth, Minnesota, USA 45.0468, -93.4747";
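To make the "JSON metadata file for each capture" idea concrete, here is a hedged sketch of what such a metadata object could look like. The function, field names, and file-naming scheme are my own assumptions for illustration, not the actual contents of upload-wx-images.js:

```javascript
// Hypothetical sketch: build a metadata object for one capture.
// Capture IDs follow the SATELLITE-YYYYMMDD-HHMMSS pattern seen in
// this project, e.g. "NOAA19-20191108-162650". Field names and the
// image file-naming scheme here are illustrative assumptions.
var LOCATION = "nootropic design ground station, Plymouth, Minnesota, USA";

function buildCaptureMetadata(captureId, enhancements) {
  var parts = captureId.split("-"); // ["NOAA19", "20191108", "162650"]
  return {
    id: captureId,
    satellite: parts[0],
    date: parts[1],
    time: parts[2],
    location: LOCATION,
    // One image per wxtoimg enhancement produced for this pass
    images: enhancements.map(function (e) {
      return captureId + "-" + e + ".jpg";
    })
  };
}
```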
Also set the REGION and BUCKET correctly in the files aws-s3/upload-upcoming-passes.js and aws-s3/remove-wx-images.js. Plug in your own values:
var REGION = "us-west-2";
var BUCKET = "nootropicdesign.wx";
Now we need to make some changes to the web content. The web interface uses Mapbox to draw the live maps of the next upcoming satellite pass. You’ll need to create an account at Mapbox to get an access token. Their free tier lets you load 50,000 maps/month, so you are not likely to have any real costs. When logged into Mapbox, get your account token from https://account.mapbox.com/.
Now in the file website/wx-ground-station.js, set your bucket name, AWS region, AWS credentials (the Cognito identity pool info you saved above), Mapbox token, and your ground station info. Some of my values are shown here for reference.
var bucketName = 'nootropicdesign.wx';
AWS.config.region = 'us-west-2'; // Region
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
IdentityPoolId: 'us-west-2:1d02ae39-30a6-497e-b066-795f070de089'
});
// Create a mapbox.com account and get access token
const MAP_BOX_ACCESS_TOKEN = 'YOUR_MAPBOX_TOKEN';
const GROUND_STATION_LAT = 45.0468;
const GROUND_STATION_LON = -93.4747;
const GROUND_STATION_NAME = 'my ground station';
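With these settings in place, the client-side code can list the bucket’s objects and group them into captures for display. As a rough, hedged sketch of that idea (the function name and key layout below are my assumptions, not the actual contents of wx-ground-station.js), grouping listed S3 keys by capture ID might look like this:

```javascript
// Illustrative sketch only: group S3 object keys by capture ID,
// assuming each capture's files embed an ID of the form
// SATELLITE-YYYYMMDD-HHMMSS (e.g. "NOAA19-20191108-162650") in the key.
function groupKeysByCapture(keys) {
  var captures = {};
  keys.forEach(function (key) {
    // e.g. "images/NOAA19-20191108-162650-therm.jpg"
    var match = key.match(/(NOAA\d+-\d{8}-\d{6})/);
    if (!match) return; // skip keys that are not capture images
    var id = match[1];
    (captures[id] = captures[id] || []).push(key);
  });
  return captures;
}
```

In the real site, the key list would come from an AWS SDK listing call against the bucket, authorized by the anonymous Cognito credentials configured above.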
Upload the Web Content to S3
Upload the contents of the website directory to your S3 bucket using the S3 console. Since you probably edited the files on your Raspberry Pi, you might need to copy them to your computer where you are accessing AWS using a browser. Whatever the case, these files need to be uploaded to the top level of your bucket. IMPORTANT: be sure to grant public access to the files when you upload them!
index.html
wx-ground-station.js
tle.js
logo.png
Of course, you can replace logo.png with your own, or just remove the <img> tag from index.html.
Test Everything Out
Now that everything is configured, let’s run the scheduling script to schedule recording of upcoming satellite passes. This way you can have a look today instead of waiting until they get scheduled at midnight. This step will also upload a JSON file with the upcoming passes info to your website.
cd wx-ground-station
./schedule_all.sh
You can now visit your AWS S3 website endpoint at
http://BUCKETNAME.s3-website-REGION.amazonaws.com/
Once again, mine is here: http://nootropicdesign.wx.s3-website-us-west-2.amazonaws.com/
Even though you don’t have any images captured yet, you should be able to see the next upcoming pass. The next thing to do is make sure the scripts work correctly to record the audio file, process it into images, and upload to your bucket. You can watch the logs in the wx-ground-station/logs directory to debug any errors.
The wxtoimg enhancements that are displayed depend on which sensors were active when the images were captured. If sensors 3 and 4 were active (usually at night), then the thermal enhancement will be shown. Otherwise a multispectral analysis enhancement will be shown.
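That selection logic can be sketched as a tiny function (illustrative only; the function name and return values are mine, not taken from the project scripts):

```javascript
// Illustrative sketch of the enhancement-selection rule described above:
// sensors 3 and 4 active (usually nighttime passes) => thermal;
// otherwise => multispectral analysis (MSA).
function chooseEnhancement(activeSensors) {
  if (activeSensors.includes(3) && activeSensors.includes(4)) {
    return "thermal";
  }
  return "msa";
}
```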
Not all images you capture will be good. I feel lucky if even half of my satellite passes produce recognizable images. You can clean up bad ones by using the script aws-s3/remove-wx-images on the Raspberry Pi. Just provide the key to the particular capture as an argument to remove all the images and the metadata from the S3 bucket.
node aws-s3/remove-wx-images NOAA19-20191108-162650
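Conceptually, the removal script just has to find every object belonging to that capture. A hedged sketch of that selection step (my own illustration; the real remove-wx-images.js may organize keys differently):

```javascript
// Illustrative only: given a full key listing and a capture ID like
// "NOAA19-20191108-162650", select the keys to delete from the bucket.
function keysToRemove(allKeys, captureId) {
  return allKeys.filter(function (key) {
    return key.indexOf(captureId) !== -1;
  });
}
```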
Hopefully in the next few hours you’ll be able to see some images uploaded, depending on when satellites are scheduled to fly over. You may get up to 12 passes per day, usually 2 for each of the NOAA satellites in the morning, then 2 more for each of them in the evening. Let us know if this project worked for you!
Fine Tuning

The script receive_and_process_satellite.sh uses the rtl_fm command to read the signal from the RTL-SDR receiver. The -p argument sets the PPM error correction. I have mine set to 0, but you may want to adjust it. See this article for details.
I have also installed a low noise amplifier (LNA) to improve my reception (results are mixed). My LNA can be powered with a bias tee circuit and controlled with the rtl_biast command. If you are using an LNA like this, you can install rtl_biast as documented here and uncomment the rtl_biast lines in receive_and_process_satellite.sh that turn the LNA on and off.