Serverless DDNS with AWS Lambda and Rust

Will Dixon
12 min read · Aug 28, 2020


As a follow-up to my previous article, I wanted to write a Dynamic DNS service that could update my Route53 domains. Serverless architecture isn’t new, but I had never written anything against AWS Lambda before. I figured this would be a great project to get my feet wet and keep AWS costs down.

For those not familiar, serverless architecture is not an architecture without servers. There are still servers involved. But an excellent way to think about it is running things when needed and turning them off when completed. Going serverless means the cost reflects exactly how much you use the architecture instead of being a set fee per month. Since DDNS update calls happen rarely, I only pay when the functions run rather than $5 a month to keep an instance running 24/7.

I have heard feedback on my previous article that I could just use a Raspberry Pi as my DDNS client and update Route53 directly. A Raspberry Pi would work. But where is the fun in that? I have a very lovely router that can already make the calls I need, so let’s use it!

I decided to use Rust as I don’t get to work in it unless I have a side project that can use it. I use Go at work and wanted a change of pace, but I still wanted the performance and low runtime overhead.

Recap on DDNS protocol

I highly suggest reading through my previous article as a guide to how I determined which protocol to use.

The following is the basic structure of a query. An Authorization header and User-Agent header are required as well. The Authorization header should be populated using Basic authentication.

GET /nic/update?hostname=home.mydomain.com&myip=192.168.0.1
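
To make the shape of that request concrete, here is a hedged, stdlib-only sketch of parsing the two query parameters; `parse_update_query` is my own hypothetical helper, not code from this project:

```rust
use std::net::IpAddr;

// Hypothetical helper: extract `hostname` and `myip` from a DDNS
// update query string. Returns None if either parameter is missing
// or the IP fails to parse.
fn parse_update_query(query: &str) -> Option<(String, IpAddr)> {
    let mut hostname = None;
    let mut myip = None;
    for pair in query.split('&') {
        let mut kv = pair.splitn(2, '=');
        match (kv.next(), kv.next()) {
            (Some("hostname"), Some(v)) => hostname = Some(v.to_string()),
            (Some("myip"), Some(v)) => myip = v.parse().ok(),
            _ => {}
        }
    }
    Some((hostname?, myip?))
}
```

Feeding it the query string from the example above would yield the hostname and a parsed `IpAddr`.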

Plan

I need a way to authenticate with the endpoint. So I will need another endpoint POST /user that can create a new user with a password and domains that the user can update. This endpoint will also need to be protected so that other individuals cannot make users on a whim. I will use the X-API-KEY header for this.

I will also need a data store to keep this information. Preferably one that charges for usage instead of a flat monthly cost. AWS DynamoDB fits that bill nicely.

Let’s get to work then.

Code to follow along with can be found here: https://github.com/dixonwille/aws-ddns. I will only be sharing snippets as these files are pretty long.

I am going to use a tool to help me provision the infrastructure. I have used CloudFormation and Terraform before, but Serverless (https://www.serverless.com/framework/docs/getting-started/) is built for standing up serverless infrastructure. Under the hood, it creates a CloudFormation stack to provision the resources anyway.

To get started run the following:

sls create -t aws-provided -n ddns -p aws-ddns

That command creates an application called ddns in a folder called aws-ddns that uses AWS’s provided runtime for Lambda. You can remove the bootstrap and handler.sh files, as I will build my own structure. The provided runtime is required because Lambda does not support Rust natively; instead, AWS publishes crates that make Rust work with the provided runtime.

There is a plugin for Serverless called serverless-rust (https://www.serverless.com/plugins/serverless-rust). Using it allows Serverless to build and deploy my application. Navigate inside of the directory created above and run sls plugin install -n serverless-rust. This command will create a package.json file. That does feel strange in a Rust project, but that is how Serverless installs its plugins.

I also need to create the basic structure of my Rust project. I did this manually since I will have multiple members in my workspace.

My project structure

My Cargo.toml file in the root only contains the following:

[workspace]
members = ["create_user", "nic", "ddns_core"]

A workspace is a way to have a single repository containing different crates. I broke it up this way to keep my two functions separate. Notice that create_user and nic will both build a binary (because they contain a src/main.rs file), whereas ddns_core will only be a library (because of its src/lib.rs file).

The serverless.yml file defines how my functions get executed in AWS and other information needed to create the application from nothing. And requests/create_user.json is a JSON Schema file used by AWS to validate the body of incoming requests. This file is useful because API Gateway will reject a bad request without ever invoking the Lambda function.

Configuring Serverless

Now I have to start configuring Serverless, so it knows what I want. To set up AWS credentials, you can follow this guide https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html.

Serverless has a way to use variables throughout its configuration. Anything in ${} is a variable. Read more on them here: https://www.serverless.com/framework/docs/providers/aws/guide/variables/.

To start with, I need to add some more information to our provider (AWS):

provider:
  name: aws
  runtime: rust
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-east-1'}
  environment:
    USERS_TABLE_NAME: ${self:custom.tableName}
  endpointType: REGIONAL

I need to change runtime to rust so that Serverless knows to use the plugin I installed. On top of this, adding a stage and region allows me to have multiple deployments in a single account (development and production) and to specify which AWS region these resources live in. I have also set up an environment variable available to all my functions, so they know which DynamoDB table they should be using. Last but not least, endpointType. This field is set to REGIONAL, as I don’t need edge support for these functions; I will only be consuming them from one location. Edge support is a feature that runs the function closer to the user who is calling it (allowing for lower latency).

I also need to give my functions permission to call certain AWS resources:

iamRoleStatements:
  - Effect: Allow
    Action:
      - dynamodb:GetItem
      - dynamodb:PutItem
    Resource:
      - 'Fn::Join':
          - ':'
          - - 'arn:aws:dynamodb'
            - Ref: 'AWS::Region'
            - Ref: 'AWS::AccountId'
            - 'table/${self:custom.tableName}'
  - Effect: Allow
    Action:
      - route53:ChangeResourceRecordSets
    Resource:
      - arn:aws:route53:::hostedzone/*
  - Effect: Allow
    Action:
      - route53:ListHostedZones
    Resource:
      - "*"
apiKeys:
  - ${opt:stage, 'dev'}-adminKey

This block of code exists under the provider block above (tabbed in).

The iamRoleStatements are IAM statements that get appended to the Lambda role that Serverless creates. Here, I am giving my functions the minimum permissions needed to accomplish their goals: reading and writing the specific DynamoDB table, listing hosted zones, and changing record sets in Route53.

The apiKeys tell Serverless that I want it to create an API key that I can use to secure certain functions. The API key will be useful for the create_user function defined later.

Now let me set up the serverless-rust plugin:

custom:
  rust:
    dockerless: true
package:
  individually: true
plugins:
  - serverless-rust

The plugins field specifies that I want to use the serverless-rust plugin for this application. package.individually states that I want each of my functions in its own zip file. custom.rust.dockerless says I want to build my Rust programs on my machine instead of inside a Docker image.

The following command needs to be run so that local builds target Lambda’s architecture:

rustup target add x86_64-unknown-linux-musl

I still need to define my DynamoDB table:

resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.tableName}
        AttributeDefinitions:
          - AttributeName: username
            AttributeType: S
        KeySchema:
          - AttributeName: username
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
        PointInTimeRecoverySpecification:
          PointInTimeRecoveryEnabled: false

This is just a raw CloudFormation blob that will create the DynamoDB table for me. Notice that I can still introduce Serverless variables!

Finally, onto the functions!

functions:
  create_user:
    handler: create_user
    description: Creates an authorized user to update DNS record
    events:
      - http:
          path: user
          method: post
          private: true
          request:
            headers:
              Content-Type: true
            schema:
              application/json: ${file(requests/create_user.json)}
  nic:
    handler: nic
    description: Update DNS records
    events:
      - http:
          path: nic/update
          method: get
          request:
            headers:
              User-Agent: true
              Authorization: true
            querystrings:
              hostname: true
              myip: true

That is a lot to chew through, but I’ll highlight the important parts. The handler is the name of the crate in the workspace I want to use for that function. The events section tells Serverless what triggers the function; in both cases, an http request does. I then define what the request looks like. In the create_user function, the schema file referenced is a JSON Schema (https://json-schema.org/). I also marked the create_user function as private. This flag tells AWS that the caller must supply a valid X-API-KEY header on the request.

Onto the Code

Before I can deploy, I need to write some code that can be used in Lambda functions.

use lambda_http::{handler, lambda, Context, IntoResponse, Request};
use serde_json::json;

type Error = Box<dyn std::error::Error + Sync + Send + 'static>;

#[tokio::main]
async fn main() -> Result<(), Error> {
    lambda::run(handler(hello)).await?;
    Ok(())
}

async fn hello(_: Request, _: Context) -> Result<impl IntoResponse, Error> {
    // `serde_json::Value` impls `IntoResponse` by default,
    // creating an application/json response
    Ok(json!({
        "message": "Go Serverless v1.0! Your function executed successfully!"
    }))
}

I found this example in the https://github.com/softprops/serverless-aws-rust-http repository. It uses the lambda_runtime crate and its helper crates (https://github.com/awslabs/aws-lambda-rust-runtime). I should note that it uses the master branch of the crate instead of what is published on https://crates.io. I tried the crates.io release first and was not successful.

The main function is decorated with the tokio::main attribute so that the Lambda runtime can take advantage of the tokio runtime for asynchronous functionality. The main function is the entry point into my Lambda function. I tell Rust to start the Lambda runtime with a specific handler called hello. hello is just a function that takes a Lambda request and context and returns either a response or an error.

While I was developing my functions, I found a few things worth noting:

  • When the function returns an error, Lambda does not give a response, nor is there a log message. So make sure to produce a response, or the client will get a 502 error. Also, make sure you log your errors to stdout so you can see them in CloudWatch.
  • The Handler trait is tough to implement manually. I usually like to create a struct with my static information (AWS clients and configuration), but I couldn’t figure out how to implement the Handler trait. It could be because of my lack of experience in Rust.
  • The function has to be in that format; I can’t pass arbitrary data alongside the request and context. For a strongly typed language, this makes sense. This limitation does mean I have to re-create the static-information struct I described previously on each request. For this project, that is fine, but I can see it becoming a performance bottleneck in more significant projects.
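
To illustrate that last point, here is a minimal sketch of rebuilding such state inside the handler on every invocation; `AppState` and its contents are hypothetical names of mine, not from the repository:

```rust
use std::env;

// Hypothetical stand-in for the static information (AWS clients,
// configuration) that would ideally be built once.
struct AppState {
    table_name: String,
}

impl AppState {
    fn from_env() -> AppState {
        AppState {
            // Falls back to a default so the sketch runs anywhere.
            table_name: env::var("USERS_TABLE_NAME")
                .unwrap_or_else(|_| "users".to_string()),
        }
    }
}

// Because the handler signature is fixed, the state must be rebuilt
// inside the handler body on each request.
fn handle_request() -> String {
    let state = AppState::from_env();
    format!("using table {}", state.table_name)
}
```

For a low-traffic DDNS endpoint this per-request cost is negligible.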

Deployment

Without fleshing out the logic and only using the above example for both functions, I can deploy my code! Running the following will build my functions and create my resources in AWS.

sls deploy

I get warnings stating that when using AWS_PROXY, the request’s querystrings and headers sections are not used. I ignored these, as I think it is nice to keep them in the serverless.yml file to document what each function expects.

If you use this command in a CI/CD pipeline, make sure to add --conceal to the end or else your API Key will be printed to the console.

This command will output the endpoints you can hit publicly! Calling an endpoint with valid data (because I told AWS I have specific requirements) will result in a JSON response with the message “Go Serverless v1.0! Your function executed successfully!”.

After building out the business logic, to deploy to a new stage, you just add -s <name> to the end of the command above. Example:

sls deploy -s prod

Business Logic

I will not be covering the code line by line, as it is viewable here: https://github.com/dixonwille/aws-ddns. But let me walk through the logic of each of my crates to better explain what is going on.

ddns_core Crate

Let me start with the ddns_core crate. This crate is home to my connections to AWS (the Route53 and DynamoDB functions). I am using the rusoto crate and its members to help (https://github.com/rusoto/rusoto). rusoto makes the actual API calls to AWS so I can retrieve and modify information. In this crate, a Client struct and a User struct are the main entry points.

The Client struct has the following methods:

  • get_user — returns a User struct based on username populated from DynamoDB.
  • put_user — takes a User struct and stores it in the DynamoDB table.
  • validate_user — takes a username, raw password, hostnames, and the user agent, then validates the information against the DynamoDB table. It verifies the user exists, the password is valid for that user, and the user is authorized to update the hostnames.
  • update_hostnames — makes the appropriate calls to Route53 to update the records. I have to list the hosted zones for the account first, so I know which zones I need to update. I then batch the hostnames by zone and send off the request(s) to Route53.
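
The batching step in update_hostnames can be sketched like this. This is a hedged, self-contained approximation: `batch_by_zone` and the tuple representation of a zone are my own, and the real code then sends requests via rusoto rather than returning a map. Route53 zone names end with a trailing dot, and the most specific matching zone should win:

```rust
use std::collections::HashMap;

// Group hostnames by the hosted zone that owns them, using a
// longest-suffix match against the zone names.
fn batch_by_zone(
    zones: &[(String, String)], // (zone_id, zone_name), e.g. ("Z1", "mydomain.com.")
    hostnames: &[String],
) -> HashMap<String, Vec<String>> {
    let mut batches: HashMap<String, Vec<String>> = HashMap::new();
    for host in hostnames {
        // Route53 zone names carry a trailing dot, so compare against
        // the dotted form of the hostname.
        let dotted = format!("{}.", host);
        // Pick the most specific (longest) zone whose name is a suffix.
        let best = zones
            .iter()
            .filter(|(_, name)| dotted.ends_with(name))
            .max_by_key(|(_, name)| name.len());
        if let Some((id, _)) = best {
            batches.entry(id.clone()).or_default().push(host.clone());
        }
    }
    batches
}
```

Each resulting batch maps onto one ChangeResourceRecordSets call.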

The User struct is a little more straightforward.

  • new — returns a User struct from the username, raw password, and hostnames.
  • set_password — is not public outside the crate but worth mentioning, as new calls it. This function turns the raw password into a hashed password before setting the field. It uses the Argon2id hash with a ChaCha-generated salt.

It was a little tricky getting the structs into the shape DynamoDB needed, so I created a User struct that can turn into the appropriate DynamoDB object. I made an extension trait on top of AttributeValue for turning Strings and HashSets into the proper type and retrieving the values. I also created an extension trait for HashMap<String, AttributeValue>, which is the final object that DynamoDB expects. Look at create_user/src/client.rs for the actual implementation of these traits.
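
The extension-trait idea looks roughly like this. It is a sketch with hypothetical names, and a minimal stand-in struct replaces rusoto_dynamodb::AttributeValue so the example is self-contained:

```rust
use std::collections::HashSet;

// Stand-in for rusoto_dynamodb::AttributeValue, with only the fields
// used here (`s` for strings, `ss` for string sets).
#[derive(Default)]
struct AttributeValue {
    s: Option<String>,
    ss: Option<Vec<String>>,
}

// Hypothetical extension trait mirroring the approach described above:
// constructors for the DynamoDB-typed values plus typed getters.
trait AttributeValueExt: Sized {
    fn from_string(s: String) -> Self;
    fn from_string_set(set: &HashSet<String>) -> Self;
    fn as_string(&self) -> Option<String>;
}

impl AttributeValueExt for AttributeValue {
    fn from_string(s: String) -> Self {
        AttributeValue { s: Some(s), ..Default::default() }
    }

    fn from_string_set(set: &HashSet<String>) -> Self {
        AttributeValue { ss: Some(set.iter().cloned().collect()), ..Default::default() }
    }

    fn as_string(&self) -> Option<String> {
        self.s.clone()
    }
}
```

With the real rusoto type, the trait impl looks the same; only the struct definition comes from the crate.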

I keep my error handling in ddns_core, as my functions had similar errors. I added a way to turn my errors into a response that I can send back to users. This functionality allowed me to use Rust’s Result type and, right before returning to the client, turn my ResponseErrors into an appropriate HTTP response. Look at create_user/src/error.rs for the actual implementation.
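
The shape of that conversion is roughly the following. This is a simplified sketch: the real ResponseError variants differ, and the real code builds a lambda_http response rather than a status-and-body tuple:

```rust
// Hypothetical error type; variants are illustrative only.
#[derive(Debug)]
enum ResponseError {
    BadRequest(String),
    Unauthorized,
    Internal(String),
}

impl ResponseError {
    // Map each error to an HTTP status and JSON body right before
    // returning to the client, so handlers can use Result internally.
    fn into_response(self) -> (u16, String) {
        match self {
            ResponseError::BadRequest(msg) => (400, format!(r#"{{"error":"{}"}}"#, msg)),
            ResponseError::Unauthorized => (401, r#"{"error":"unauthorized"}"#.to_string()),
            ResponseError::Internal(_) => (500, r#"{"error":"internal error"}"#.to_string()),
        }
    }
}
```

Note that the Internal arm deliberately drops its message so internal details never reach the client.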

From here on, it is pretty simple business logic for my functions!

create_user Crate

The create_user crate is my POST /user function.

  1. Validate the request coming in (username and password length, for example).
  2. Get the user from DynamoDB to make sure the username doesn’t already exist.
  3. Create a new User struct from the request.
  4. Put the User in DynamoDB. Remember that upon instantiation of the User struct, the password gets hashed.
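
Those four steps can be sketched end to end as follows. This is a self-contained approximation: an in-memory HashMap stands in for DynamoDB, the length limits are made up, and the placeholder “hashing” is not the real Argon2id call:

```rust
use std::collections::HashMap;

#[allow(dead_code)]
struct User {
    username: String,
    hashed_password: String,
    domains: Vec<String>,
}

// Hypothetical flow of the create_user function; errors are plain
// strings here instead of the crate's ResponseError type.
fn create_user(
    table: &mut HashMap<String, User>,
    username: &str,
    password: &str,
    domains: Vec<String>,
) -> Result<(), String> {
    // 1. Validate the request (illustrative limits).
    if username.len() < 3 || password.len() < 8 {
        return Err("username or password too short".to_string());
    }
    // 2. Make sure the username doesn't already exist.
    if table.contains_key(username) {
        return Err("user already exists".to_string());
    }
    // 3. Build the User; the real code hashes the password here.
    let user = User {
        username: username.to_string(),
        hashed_password: format!("hashed:{}", password), // placeholder, not Argon2id
        domains,
    };
    // 4. Store the user.
    table.insert(user.username.clone(), user);
    Ok(())
}
```

A second call with the same username fails at step 2, which is exactly the duplicate check the function exists to enforce.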

nic Crate

The nic crate is my GET /nic/update function.

  1. Validate the request coming in (checking for the appropriate headers and query parameters). This step also parses the Authorization header.
  2. Validate that the user is authorized to update the hostnames that were requested.
  3. Update the hostnames with the IP address requested.

Putting it all together

Now that I have a fully functioning application, I can deploy it out! Once deployed, all I have to do is make a POST request to /user with the appropriate body and X-API-KEY header. The key is found in the API Gateway section of the AWS console. The following is an example body:

{
  "username": "someuser",
  "password": "my_awesome_password",
  "domains": [
    "home.mydomain.com"
  ]
}

I can use the AWS-provided domain to hit my endpoints, but it would be nice to have them on a custom domain. Setting that up is pretty straightforward since the domain I want to use is already in Route53.

  1. Install serverless-domain-manager via sls plugin install -n serverless-domain-manager (https://github.com/amplify-education/serverless-domain-manager)
  2. Create an ACM certificate for the domain I want to use. I could have used a wildcard certificate, but that requires an extra field in the configuration, explained below.
  3. Add the following to the custom block of the serverless.yml file.
customDomain:
  domainName: ${opt:domain}
  basePath: ''
  stage: prod
  endpointType: regional
  autoDomain: true

The above configuration takes the domain from the --domain <domain_name> flag, applies only to the prod stage, and (because of autoDomain: true) does not require any extra commands before running deploy.

If I wanted to use a wildcard certificate, I would add the certificateName field with the certificate’s name, including the * character.

Conclusion

AWS serverless architecture makes it affordable to run cloud applications. Keeping costs small is essential for hobbyists who like offloading some home servers to the cloud.

I am still a novice when it comes to Rust, and it showed in this project. I have some heavily nested match statements, which makes them harder to read. I am also not too keen on how I handled errors; it felt messy. It may be the Go inside of me tripping me up. I understand traits, but I struggle to identify where creating a trait could help me.

As for the Serverless tool, its documentation was excellent! It made creating the infrastructure so much easier. If I had gone with Terraform, which I was considering, I would have had to figure out the whole “building before creating the Lambda resource” portion of the deploy myself. Serverless abstracted that away and handled it beautifully!

There was one pain point: I couldn’t easily test the functions locally. I know there is a way to set it up, but I did not put effort into figuring it out. Instead, I just deployed to the dev stage and used that until I thought it was ready to be promoted to the prod stage. In a more enterprise environment, I would have spent more time on local testing.

All in all, I am happy with how this project turned out. I didn’t follow the spec (https://help.dyn.com/remote-access-api/perform-update/) as closely as I originally planned, but that is fine. It is functioning and working as expected. I feel like I learned a lot along the way, but I know I have areas I can improve on.


Will Dixon

Senior Site Reliability Engineer and technology hobbyist