The Effect of Memory Configuration on AWS Lambda’s Network Throughput

DPG Media
Published in Level Up Coding
3 min read · Apr 7, 2022

Written by Geert Van Wauwe, Backend/Fullstack Developer

This post investigates the effect of various memory configurations on the perceived network throughput of AWS Lambda. I start with a brief introduction to AWS Lambda, then analyze the throughput while uploading a fixed-size image. Finally, I do a cost analysis and summarize the findings.

What is AWS Lambda?

AWS Lambda is AWS’s implementation of serverless functions: short-lived containers that execute custom code. It lets you move fast without having to provision a server or carry the maintenance burden that comes with it. And you only pay for the time your code is actually executing. What’s not to love?

The Problem: Slow Throughput Speed of Lambda

AWS Lambda is probably my favorite AWS service, and I have been using it extensively for over three years. Recently, I noticed something peculiar. A Lambda function that streams image content from A to B regularly took over 50 seconds to complete.

The code of this Lambda function was written in Python. The Lambda had a VPC configuration and was configured with 256 MB of memory.

The average transferred image size is about 20 MB, so I had expected most executions to finish quickly, say 2 to 3 seconds at most. Carefully examining the metrics, I noticed that a 50-second duration was not exactly exceptional. Something was definitely going on here.
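To put numbers on that expectation, here is a quick back-of-the-envelope check (the 20 MB size and the durations are the ones mentioned above):

```python
# Back-of-the-envelope throughput check for the numbers above.
# Sizes in MB, durations in seconds.

def throughput_mb_s(size_mb: float, duration_s: float) -> float:
    """Average throughput in MB/s for a transfer of size_mb taking duration_s."""
    return size_mb / duration_s

expected = throughput_mb_s(20, 2.5)  # 20 MB in 2-3 s -> ~8 MB/s
observed = throughput_mb_s(20, 50)   # the ~50 s executions -> 0.4 MB/s

print(f"expected: {expected:.1f} MB/s, observed: {observed:.1f} MB/s")
```

The observed throughput is roughly 20 times lower than what one would reasonably expect from a transfer inside AWS.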

The documentation on Lambda’s network throughput is unfortunately rather limited. One AWS blog post briefly mentions:

“Generally, CPU-bound Lambda functions see the most benefit when memory increases, whereas network-bound see the least.”

Most documentation, however, focuses on the relation between Lambda memory and CPU. Not a promising start… but the network throughput seemed so low that increasing the memory was worth a shot. Increasing the memory from 256 MB to 5120 MB immediately resulted in a severe drop in Lambda duration.
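For reference, such a memory bump can be done with a single boto3 call (the function name below is hypothetical; the `MemorySize` parameter is in MB):

```python
# Bumping the memory of an existing Lambda function via boto3.
# "image-streamer" is a hypothetical function name.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="image-streamer",
    MemorySize=5120,  # MB; was 256 before
)
```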

The configured memory was impacting the execution time, but by how much? Let’s set up an experiment.

The experiment

To make the tests reproducible, I used the same 45 MB image in every test and only changed the memory of the Lambda function. I ran multiple tests per memory configuration to get a good average duration. The average throughput is calculated from that 45 MB image size.
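The per-configuration metric can be computed as a one-liner; the durations below are placeholders for illustration, not the measured results:

```python
# Average duration and throughput for one memory configuration.
# The run durations below are hypothetical placeholders.
from statistics import mean

IMAGE_SIZE_MB = 45  # the fixed test image

def avg_throughput(durations_s):
    """Average throughput in MB/s over several runs with the same 45 MB image."""
    return IMAGE_SIZE_MB / mean(durations_s)

# e.g. three hypothetical runs at one memory setting:
runs = [90.0, 100.0, 110.0]
print(f"{avg_throughput(runs):.2f} MB/s")
```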

The results

Graphical Representation

Interpretation of the Results

One does not need to be a math whiz to see where this is going. Increasing the memory of the AWS Lambda function increases throughput drastically. These experimental results suggest that the network throughput of AWS Lambda is impacted by the memory configuration, as briefly mentioned in the blog post quoted above. However, since increasing memory also increases the CPU, as mentioned in the docs, it’s still possible that the CPU is the bottleneck.

Cost Analysis

AWS Lambda bills you for every GB-s you use¹. So one would expect that increasing the memory, i.e. the GB part of GB-s, increases the cost. However, this can be offset by the reduced Lambda execution time (the s part of GB-s). By increasing the memory from 256 MB to 1536 MB, the cost dropped because of the reduced execution time.
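The trade-off is easy to write down. The per-GB-s price below was roughly the published rate in most regions at the time of writing (check current pricing), and the durations are illustrative, not measured: 6× the memory pays off as soon as the duration drops by more than a factor of 6.

```python
# Lambda duration cost in USD, billed per GB-second.
# PRICE_PER_GB_S is roughly the published rate at the time of
# writing; check current AWS Lambda pricing for your region.
PRICE_PER_GB_S = 0.0000166667

def duration_cost(memory_mb: float, duration_s: float) -> float:
    """Cost of one invocation: memory in GB times billed duration in seconds."""
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_S

# Illustrative durations, not the measured results:
slow = duration_cost(256, 50)   # 256 MB for 50 s  -> 12.5 GB-s
fast = duration_cost(1536, 5)   # 1536 MB for 5 s  -> 7.5 GB-s

print(f"256 MB: ${slow:.6f}  1536 MB: ${fast:.6f}")
```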

Benefits

  • Faster execution time
  • Lower cost
  • A lower Lambda timeout can be configured
  • Faster retries are possible

Increasing the memory of a Lambda function drastically increases both its CPU power and its network throughput.

Footnote

1: This is not completely true: you are not billed for the initialization of the Lambda function.

Originally published at https://www.softwareconviction.com on April 7, 2022.


We are the tech team behind the digital products of all DPG Media’s brands and internal apps!