I recently did a spike on a new runtime for AWS Lambda called LLRT (Low Latency Runtime).
It came out about a year ago but I hadn't dug into it until recently.
LLRT offers several compelling advantages over the standard Node.js runtime: it's built in Rust on top of QuickJS, skips JIT compilation entirely, and AWS reports more than 10x faster startup and up to 2x lower overall cost compared to other JavaScript runtimes on Lambda.

Not every Node.js feature is supported yet; see https://github.com/awslabs/llrt?tab=readme-ov-file#compatibility-matrix for the compatibility matrix and https://github.com/awslabs/llrt/issues for open issues.

I'm imagining a world where your code's AST is parsed to automatically infer whether LLRT will work for it, and the runtime is then applied for you, netting all the benefits listed above with zero effort.
One day...
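In the meantime, here's a rough sketch of how such a check might start: a hypothetical script that parses a file with acorn and flags imports of Node built-ins LLRT doesn't support. The UNSUPPORTED set below is purely illustrative; the real source of truth is the compatibility matrix linked above.

import fs from 'node:fs'
import { parse } from 'acorn'
import { simple } from 'acorn-walk'

// Illustrative only; consult the LLRT compatibility matrix for the real list
const UNSUPPORTED = new Set(['cluster', 'worker_threads', 'http2', 'vm'])

function checkLlrtCompat(filePath) {
  const code = fs.readFileSync(filePath, 'utf8')
  const ast = parse(code, { ecmaVersion: 'latest', sourceType: 'module' })
  const problems = []
  simple(ast, {
    // ES module imports: import cluster from 'node:cluster'
    ImportDeclaration(node) {
      const name = String(node.source.value).replace(/^node:/, '')
      if (UNSUPPORTED.has(name)) problems.push(name)
    },
    // CommonJS: require('worker_threads')
    CallExpression(node) {
      if (node.callee.name === 'require' && node.arguments[0]?.type === 'Literal') {
        const name = String(node.arguments[0].value).replace(/^node:/, '')
        if (UNSUPPORTED.has(name)) problems.push(name)
      }
    }
  })
  return problems // empty array = nothing obviously incompatible
}

console.log(checkLlrtCompat(process.argv[2]))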
Anyway, here are the project details.
This project demonstrates how to build a serverless API on AWS Lambda using LLRT (Low Latency Runtime), a lightweight JavaScript runtime built for fast cold starts. The functions sit behind simple HTTP API Gateway endpoints.
First, install the Serverless Framework globally:

npm install -g serverless

Then install dependencies, run setup, and deploy:

npm install
npm run setup
npm run deploy
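The setup step is what pulls the LLRT bootstrap binary into the project. Here's a minimal sketch of what a script like scripts/setup.js could do; the release tag and asset name below are assumptions, so verify them against the LLRT releases page:

import { execSync } from 'node:child_process'

// Hypothetical pinned release and asset name; check
// https://github.com/awslabs/llrt/releases for current values
const VERSION = 'v0.3.0-beta'
const ASSET = 'llrt-lambda-arm64.zip'
const URL = `https://github.com/awslabs/llrt/releases/download/${VERSION}/${ASSET}`

// Download the release zip and extract the bootstrap binary to the project root
execSync(`curl -L -o ${ASSET} ${URL}`, { stdio: 'inherit' })
execSync(`unzip -o ${ASSET} bootstrap`, { stdio: 'inherit' })
console.log('LLRT bootstrap downloaded')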
GET /hello

Query parameters:
  name (optional) - Name to greet

Example response:

{
  "message": "Hello World from LLRT!",
  "timestamp": "2024-03-14T12:00:00.000Z",
  "path": "/hello",
  "method": "GET"
}
GET /goodbye

Query parameters:
  name (optional) - Name to bid farewell

Example response:

{
  "message": "Goodbye World, thanks for using LLRT!",
  "timestamp": "2024-03-14T12:00:00.000Z",
  "path": "/goodbye",
  "method": "GET"
}
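The handlers themselves can stay tiny. Here's a sketch of what src/hello.js might look like, assuming the default HTTP API (payload v2) event shape; the actual code in the repo may differ, and the goodbye handler would mirror it:

// Sketch of src/hello.js; actual implementation may differ
export const handler = async (event) => {
  const name = event.queryStringParameters?.name || 'World'
  return {
    statusCode: 200,
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({
      message: `Hello ${name} from LLRT!`,
      timestamp: new Date().toISOString(),
      path: event.rawPath,
      method: event.requestContext.http.method
    })
  }
}

The project layout looks like this: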
.
├── bootstrap # LLRT runtime
├── node_modules # Node.js dependencies
├── package.json
├── serverless.yml
├── src # Source code
│ ├── hello.js
│ └── goodbye.js
├── scripts # Deployment scripts
│ ├── setup.js
│ └── deploy.js
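The key trick in serverless.yml is using a custom runtime, so Lambda executes the bundled bootstrap binary (LLRT itself) instead of Node.js. A minimal sketch, assuming arm64 and the provided.al2023 custom runtime; the service name and details are illustrative and the actual config may differ:

service: llrt-spike   # illustrative name

provider:
  name: aws
  runtime: provided.al2023   # custom runtime; the bundled bootstrap binary is LLRT
  architecture: arm64

functions:
  hello:
    handler: src/hello.handler
    events:
      - httpApi:
          path: /hello
          method: get
  goodbye:
    handler: src/goodbye.handler
    events:
      - httpApi:
          path: /goodbye
          method: get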
To deploy the service, run:
npm run deploy
To delete the service, run:
npm run remove
You can run a performance test against the deployed endpoints with node perf-test.js.
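For reference, a script like perf-test.js boils down to something like this sketch (not the repo's actual script): warm up the endpoint, fire 100 requests with 10 in flight at a time, then report min/max/average and p95/p99. ENDPOINT is assumed to be your deployed URL.

// Simplified sketch of a perf script; not the actual perf-test.js
const URL = process.env.ENDPOINT
const TOTAL = 100
const CONCURRENCY = 10
const WARMUP = 5

async function timedRequest(url) {
  const start = performance.now()
  await fetch(url)
  return performance.now() - start
}

async function run() {
  // Warmup requests are excluded from the stats
  for (let i = 0; i < WARMUP; i++) await timedRequest(URL)

  const times = []
  for (let i = 0; i < TOTAL; i += CONCURRENCY) {
    const batch = Array.from({ length: CONCURRENCY }, () => timedRequest(URL))
    times.push(...(await Promise.all(batch)))
    process.stdout.write('.')
  }

  times.sort((a, b) => a - b)
  const pct = (p) => times[Math.ceil((p / 100) * times.length) - 1]
  const avg = times.reduce((a, b) => a + b, 0) / times.length
  console.log(`\nMin response time: ${times[0].toFixed(2)}ms`)
  console.log(`Max response time: ${times[times.length - 1].toFixed(2)}ms`)
  console.log(`Average response time: ${avg.toFixed(2)}ms`)
  console.log(`95th percentile: ${pct(95).toFixed(2)}ms`)
  console.log(`99th percentile: ${pct(99).toFixed(2)}ms`)
}

run()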
First run. Average response time: 54.05ms. Not too shabby.
───────────────────────────────
Starting performance tests...
Configuration:
- Concurrent requests: 10
- Total requests per endpoint: 100
- Warmup requests: 5
Running performance test for hello endpoint...
Warming up...
Running 100 requests with 10 concurrent requests...
..........
Results for hello:
Total requests: 100
Min response time: 34.72ms
Max response time: 290.14ms
Average response time: 73.94ms
95th percentile: 210.99ms
99th percentile: 290.14ms
Running performance test for goodbye endpoint...
Warming up...
Running 100 requests with 10 concurrent requests...
..........
Results for goodbye:
Total requests: 100
Min response time: 29.46ms
Max response time: 156.11ms
Average response time: 54.05ms
95th percentile: 116.24ms
99th percentile: 156.11ms
Second run. Average response time: 44.51ms
───────────────────────────────
Starting performance tests...
Configuration:
- Concurrent requests: 10
- Total requests per endpoint: 100
- Warmup requests: 5
Running performance test for hello endpoint...
Warming up...
Running 100 requests with 10 concurrent requests...
..........
Results for hello:
Total requests: 100
Min response time: 32.55ms
Max response time: 278.29ms
Average response time: 70.03ms
95th percentile: 151.55ms
99th percentile: 278.29ms
Running performance test for goodbye endpoint...
Warming up...
Running 100 requests with 10 concurrent requests...
..........
Results for goodbye:
Total requests: 100
Min response time: 31.30ms
Max response time: 278.21ms
Average response time: 44.51ms
95th percentile: 67.78ms
99th percentile: 278.21ms
https://github.com/DavidWells/spike-llrt-runtime
Enjoy.