
FAST Serverless JavaScript on AWS Lambda with LLRT

I recently did a spike on a new runtime for AWS Lambda called LLRT (Low Latency Runtime).

It came out about a year ago but I hadn't dug into it until recently.

Why is LLRT cool?

LLRT offers several compelling advantages over traditional Node.js runtimes:

  1. Lightning-Fast Cold Starts: LLRT is built from the ground up to minimize cold start latency, making your Lambda functions respond almost instantly.
  2. ARM64 Optimization: Built for ARM64 architecture, LLRT delivers better performance while reducing costs compared to x86-based runtimes.
  3. Minimal Memory Footprint: The runtime is incredibly lightweight, allowing your functions to start up with minimal memory overhead.

Gotchas

Not every Node.js feature is supported yet; see https://github.com/awslabs/llrt?tab=readme-ov-file#compatibility-matrix for the compatibility matrix and https://github.com/awslabs/llrt/issues for known issues.

Dream state

I'm imagining a world where your code's AST is parsed to automatically infer whether the LLRT runtime will work for your code, and if it will, LLRT is applied automatically so you get all the benefits listed above.
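That inference could start as small as scanning a module's imports against a list of modules LLRT supports. A crude JavaScript sketch of the idea (a real tool would parse the AST properly, e.g. with acorn, and consult LLRT's actual compatibility matrix — the supported-module list below is illustrative, not authoritative):

```javascript
// Crude sketch: scan a source string's imports/requires and flag any
// module that isn't on a known-supported list. The SUPPORTED set here
// is illustrative only - check LLRT's real compatibility matrix.
const SUPPORTED = new Set(["fs", "path", "crypto", "buffer", "events"]);

function checkLlrtCompatibility(source) {
  const importRe = /(?:require\(\s*["']([^"']+)["']\s*\)|from\s+["']([^"']+)["'])/g;
  const unsupported = [];
  for (const match of source.matchAll(importRe)) {
    const mod = (match[1] || match[2]).replace(/^node:/, "");
    if (mod.startsWith(".")) continue; // relative imports are your own code
    if (!SUPPORTED.has(mod)) unsupported.push(mod);
  }
  return { compatible: unsupported.length === 0, unsupported };
}

module.exports = { checkLlrtCompatibility };
```

A real version would also need to follow transitive dependencies, which is where this gets hard.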

One day...

Anyway, here are the project details.

LLRT Lambda API Example

This project demonstrates how to build a serverless API using AWS Lambda with LLRT (Low Latency Runtime) - a lightweight JavaScript runtime optimized for AWS Lambda.

The functions sit behind simple HTTP API Gateway endpoints.

Features

  • Fast cold starts with LLRT runtime
  • Multiple API endpoints (/hello and /goodbye)
  • ARM64 architecture for better performance/cost ratio
  • Custom Serverless Framework plugin for deployment validation
  • Automated LLRT bootstrap setup

Prerequisites

  • Node.js 18 or later
  • AWS CLI configured with appropriate credentials
  • Serverless Framework CLI (npm install -g serverless)

Quick Start

  1. Clone the repository
  2. Install dependencies:
npm install
  3. Set up LLRT bootstrap:
npm run setup
  4. Deploy to AWS:
npm run deploy

API Endpoints

Hello Endpoint

  • URL: GET /hello
  • Query Parameters:
    • name (optional) - Name to greet
  • Example Response:
{
  "message": "Hello World from LLRT!",
  "timestamp": "2024-03-14T12:00:00.000Z",
  "path": "/hello",
  "method": "GET"
}
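The handler behind a response like this can be tiny. A plausible sketch of hello.js (illustrative — the repo's actual code may differ), reading the HTTP API v2 event fields:

```javascript
// Sketch of a hello handler producing the response shape shown above.
// The same code runs unchanged on the Node.js and LLRT runtimes.
const handler = async (event) => {
  const name = event.queryStringParameters?.name || "World";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      message: `Hello ${name} from LLRT!`,
      timestamp: new Date().toISOString(),
      path: event.rawPath || "/hello",
      method: event.requestContext?.http?.method || "GET",
    }),
  };
};

module.exports = { handler };
```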

Goodbye Endpoint

  • URL: GET /goodbye
  • Query Parameters:
    • name (optional) - Name to bid farewell
  • Example Response:
{
  "message": "Goodbye World, thanks for using LLRT!",
  "timestamp": "2024-03-14T12:00:00.000Z",
  "path": "/goodbye",
  "method": "GET"
}

Project Structure

.
├── bootstrap        # LLRT runtime
├── node_modules     # Node.js dependencies
├── package.json
├── serverless.yml
├── src              # Source code
│   ├── hello.js
│   └── goodbye.js
└── scripts          # Deployment scripts
    ├── setup.js
    └── deploy.js

Deployment

To deploy the service, run:

npm run deploy

Cleanup

To delete the service, run:

npm run remove

Notes

Performance results

You can run a performance test with node perf-test.js

Average response time: 54.05ms. Not too shabby.
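For reference, the core of a script like that is just batched concurrent requests plus percentile math. A stripped-down sketch (BASE_URL is a placeholder and the actual perf-test.js may differ):

```javascript
// Stripped-down concurrent perf test: fire requests in batches of
// CONCURRENCY, record each latency, then report average and percentiles.
// BASE_URL is a placeholder - point it at your deployed API Gateway URL.
const BASE_URL =
  process.env.BASE_URL || "https://example.execute-api.us-east-1.amazonaws.com";

const CONCURRENCY = 10;
const TOTAL = 100;

function percentile(sorted, p) {
  // Nearest-rank percentile over an ascending-sorted array.
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

async function timeRequest(path) {
  const start = performance.now();
  await fetch(`${BASE_URL}${path}`);
  return performance.now() - start;
}

async function run(path) {
  const times = [];
  for (let i = 0; i < TOTAL; i += CONCURRENCY) {
    const batch = Array.from({ length: CONCURRENCY }, () => timeRequest(path));
    times.push(...(await Promise.all(batch)));
  }
  times.sort((a, b) => a - b);
  const avg = times.reduce((a, b) => a + b, 0) / times.length;
  console.log(`avg: ${avg.toFixed(2)}ms`);
  console.log(`p95: ${percentile(times, 95).toFixed(2)}ms`);
  console.log(`p99: ${percentile(times, 99).toFixed(2)}ms`);
}

// Usage: run("/hello").then(() => run("/goodbye"));
```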

───────────────────────────────
Starting performance tests...
Configuration:
- Concurrent requests: 10
- Total requests per endpoint: 100
- Warmup requests: 5

Running performance test for hello endpoint...
Warming up...
Running 100 requests with 10 concurrent requests...
..........

Results for hello:
Total requests: 100
Min response time: 34.72ms
Max response time: 290.14ms
Average response time: 73.94ms
95th percentile: 210.99ms
99th percentile: 290.14ms

Running performance test for goodbye endpoint...
Warming up...
Running 100 requests with 10 concurrent requests...
..........

Results for goodbye:
Total requests: 100
Min response time: 29.46ms
Max response time: 156.11ms
Average response time: 54.05ms
95th percentile: 116.24ms
99th percentile: 156.11ms

Second run. Average response time: 44.51ms

───────────────────────────────
Starting performance tests...
Configuration:
- Concurrent requests: 10
- Total requests per endpoint: 100
- Warmup requests: 5

Running performance test for hello endpoint...
Warming up...
Running 100 requests with 10 concurrent requests...
..........

Results for hello:
Total requests: 100
Min response time: 32.55ms
Max response time: 278.29ms
Average response time: 70.03ms
95th percentile: 151.55ms
99th percentile: 278.29ms

Running performance test for goodbye endpoint...
Warming up...
Running 100 requests with 10 concurrent requests...
..........

Results for goodbye:
Total requests: 100
Min response time: 31.30ms
Max response time: 278.21ms
Average response time: 44.51ms
95th percentile: 67.78ms
99th percentile: 278.21ms

The code is here:

https://github.com/DavidWells/spike-llrt-runtime

Enjoy.