What is this project about?
Bun has captivated my attention for quite some time now, but it wasn't until the 1.0.0 release that I decided to dive a bit deeper. Being a huge fan of Docker, I took a look at their official Bun image and was confused by its size.
That sparked my interest in contributing to the project and making the smallest Bun Docker image possible. What started as a weekend project took a bit longer, but I succeeded. Join me on my journey of security testing, Docker optimizations, and so much more.
If you want to check out the smallest Bun image, you can hop straight to dejangegic/bun on Docker Hub.
What is Bun, and why should I care?
I suppose most of you are already familiar with it, so feel free to skip this section.
Bun sets out to provide a whole set of tooling: a runtime, transpiler, bundler, test runner, and package manager. So, why would you care? Node.js and npm exist, right? Bun has the unfair advantage of being a new project, so there's no legacy code or compatibility baggage slowing it down. The Bun team boasts that their runtime, which is written in Zig, can process 4x more requests per second than Node.js.
But that is not even my favorite part: bun install replaces npm install so well that it makes you never want to look back. Installing a package for the first time is faster than with npm, and if you have already installed it in another project, you won't even have to re-download it, as it is cached.
Besides that, as I mentioned, they have rebuilt the whole tooling stack from scratch, and in my opinion they did a great job. Give it a shot, you won't regret it.
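If you just want a quick taste, the basic commands map almost one-to-one onto what you already know from Node.js and npm (a minimal sketch, assuming Bun is already installed from bun.sh):
bun install        # install dependencies from package.json, reusing Bun's global cache
bun run app.js     # run a file with the Bun runtime
bun test           # run the built-in test runner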
Why?
Security
This one is by far the most important for me but is the hardest to "sell" to people. Performance graphs are pretty, informative, and get the point across. I can't say the same for vulnerability reports.
Although I tested several products, in the end I settled on Trivy for vulnerability scanning. Trivy is simple to use: besides scanning language-specific files, it also scans the OS packages and reports CVEs. And being open source is a huge plus in my book.
We also have to take into account that Alpine has 0 publicly known vulnerabilities, while Debian has quite a few.
Scan results
Scanning is as simple as running trivy image oven/bun
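If you want to compare the images yourself, the same one-liner works for any of them (the --severity filter is optional and shown here only as an example):
trivy image oven/bun                                 # scan the official image
trivy image --severity HIGH,CRITICAL dejangegic/bun  # scan this project's image, high/critical findings only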
| Scanned Image | Low | Medium | High | Critical | Total |
| --- | --- | --- | --- | --- | --- |
| Official Bun | 66 | 16 | 13 | 1 | 96 |
| debian (Official base image) | 50 | 9 | 1 | 0 | 60 |
| dejangegic/bun | 0 | 0 | 0 | 0 | 0 |
I need to stress that just because a vulnerability exists doesn't mean it's exploitable. But I'm still going to sleep better at night knowing my containers are as safe as they can be. And here's a pretty graph as promised.
Resources
Smaller images require less disk space and network bandwidth than their larger counterparts. That's kind of obvious, but does it even matter now that storage and bandwidth are cheaper than ever? For a single image, it might not. But when you're dealing with hundreds of images, and when you can make each one a couple of orders of magnitude smaller (I shrank a Go image from 1150MB to 3MB, for example), the differences sure do add up.
How was it done?
Our starting point is the original Bun image, which is Debian-based and 290MB in size. Not a huge image, but it could be smaller.
To make the smallest image I could, I had to employ a few different methods. This led me to create 4 different images, which I have tagged appropriately:
- bunx: just has a different base image, and is the only image with bunx in it. More details in the "Base image" section.
- default: the same as bunx, except it omits bunx. More details in the "Omitting Bunx in production" section.
- compress: the default image with compressed Bun binaries, making it even smaller. More details in the "Compression" section.
- smallest: just a higher level of compression; a smaller size is traded for a longer start-up time.
Multi-stage builds
Docker has a wonderful feature that allows us to build images in multiple stages. For example, you can compile your app in one container and then transfer just the binary to a fresh one without the need to ship the compiler with it.
Here we'll see how it's used. Multi-stage builds are signified by the as keyword and multiple FROM statements.
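As a standalone illustration (a generic, hypothetical example in the spirit of the Go image I mentioned earlier, not part of the Bun image itself), a multi-stage build looks like this:
# stage 1: compile the app with the full toolchain
FROM golang:1.21 as build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .
# stage 2: ship only the compiled binary on a tiny base
FROM alpine:3.18
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
The compiler stays in the build stage; only the binary ends up in the final image.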
Base image
The first place to start when building a new image should be the base image. In this case, the original Bun image is based on Debian, which is not exactly the smallest base image. So I did what anyone else would: try to make my own version using Alpine as a base image, as it is only ~5MB.
The problem with using Alpine with Bun is the glibc dependency, or rather, the lack of it, because Alpine ships with the smaller musl libc. For those who aren't C fans (don't worry, I fit in the same camp), glibc and musl are C standard libraries on which a lot of applications rely, not just those written in C. Bun is one of those apps, and in its case, musl just won't cut it; it needs glibc.
Now, at that point, I started testing out slimmer images like ubuntu:focal, which is 76MB, or Red Hat's ubi9-micro, which has a footprint of only 26MB. Those worked fine, but I wanted to go even smaller. That's where alpine-glibc by frolvlad comes in. It's nothing more than vanilla Alpine with glibc on top of it, and it works great. Clocking in at merely 16MB, it's the smallest suitable base I could find. Migrating to alpine-glibc shrank the image by 75MB, or ~25%.
| Base Image | Size |
| --- | --- |
| debian | ~112MB |
| debian-slim | ~78MB |
| ubi9-micro | ~25MB |
| alpine-glibc | ~16MB |
| alpine | ~5MB |
This brings us to how bunx was built.
FROM oven/bun:${VERSION} as bun
FROM frolvlad/alpine-glibc
COPY --from=bun /usr/local/bin/bun /usr/local/bin/
COPY --from=bun /usr/local/bin/bunx /usr/local/bin/
${VERSION} is replaced with the version you want to build, for example FROM oven/bun:1.0.0 as bun. Here we can see multi-stage builds in action: we take the official Bun image and copy over the bun and bunx binaries. And that's it, we're done!
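If you want to build this yourself, one way to supply the version (an assumption on my part, not necessarily how the project's own builds substitute ${VERSION}) is a build argument:
# hypothetical variant using a build argument instead of templating the Dockerfile
ARG VERSION=1.0.0
FROM oven/bun:${VERSION} as bun
FROM frolvlad/alpine-glibc
COPY --from=bun /usr/local/bin/bun /usr/local/bin/
COPY --from=bun /usr/local/bin/bunx /usr/local/bin/
Then build with docker build --build-arg VERSION=1.0.0 -t bun-bunx ., where the tag name is just an example.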
Omitting Bunx in production
If you've ever wanted to run a package without installing it to node_modules or globally, then you're probably familiar with npx. Npx is a package runner, and yes, you can run it on packages installed using npm too. Bunx is the same thing as Npx, only faster. But everything that Npx and Bunx can do, a script does better in production. And Bunx is the same size as the Bun binary, so removing it is a no-brainer.
Just doing this decreased the image by another 103MB, which is no small feat for such a simple step.
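To make the "a script does better" point concrete: anything you would otherwise run through bunx in production can be pinned as a regular dependency and exposed through a package.json script (the tool and script name below are purely hypothetical):
# instead of running a tool through bunx on every start:
bunx some-cli migrate                  # "some-cli" is a hypothetical package
# pin it as a dependency, add "migrate": "some-cli migrate" to "scripts"
# in package.json, and run it with:
bun run migrate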
And here's how we did that.
FROM oven/bun:${VERSION} as bun
FROM frolvlad/alpine-glibc
COPY --from=bun /usr/local/bin/bun /usr/local/bin/
Painfully simple. Same as the previous one, excluding Bunx.
Compression
Compressed image
The only logical next step that came to my mind was compressing the Bun binary itself. It was, after all, larger than the rest of the system put together.
And I can't think of a better tool for compressing binaries than upx, which has been in development since 1996!
Now, for the compressed image I ran upx --all-methods --no-lzma, which skips LZMA compression, as LZMA significantly increases start-up time. This brought a further ~67MB reduction in size, or around 17% of the original image. It does increase start-up time to just under 250ms on my system, but I don't see that being a problem on a production server that does not need to hot-reload every couple of minutes.
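If you want to get a feel for the size/start-up trade-off yourself, a rough local experiment might look like this (it assumes you have upx installed and a copy of the bun binary in the current directory; your numbers will differ):
ls -lh bun                       # size before compression
upx --all-methods --no-lzma bun  # compress the binary in place
ls -lh bun                       # size after compression
time ./bun --version             # rough measure of the added start-up cost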
Dockerfile for compressed
FROM oven/bun:${VERSION} as bun
FROM alpine:3.18 as alpine
RUN apk add upx
COPY --from=bun /usr/local/bin/bun /usr/local/bin/
WORKDIR /usr/local/bin
# Compress bun binary
RUN upx --all-methods --no-lzma bun
FROM frolvlad/alpine-glibc
COPY --from=alpine /usr/local/bin/bun /usr/local/bin/
Now here we have an additional build step, how exciting. Let's quickly go over this one.
So, just as before, the stage with the original binaries is designated as bun, but this time we copy the binary into an intermediate Alpine stage (it doesn't matter which distro), where we compress it using upx before it goes into its final location.
Now, why use the second image at all? We need to install upx so we can compress the binary, and there are 3 ways to do that:
- The first is to do it directly in the oven/bun stage, which is convenient, but that image is prone to change. The Bun team might choose to migrate to a different base image, and that would mess up our whole build process.
- The second approach would be to do it in the final stage. But that means we need to install upx there, which defeats the purpose of compressing if we have to carry that install around.
- That leads us to the third approach, which we have already seen utilized: use a second image just to compress the binaries.
Smallest image
That should have been enough; I could have called it a day, as the results had already far exceeded my expectations. But there was one more thing that needed to be done: use LZMA too, with upx --all-methods bun. Now, I know I could have used upx -o ./bestUltaBrute2 --best --ultra-brute --all-methods bun if I wanted to be over-the-top and throw everything I had at it, but adding so much compression time for no more than a couple dozen kilobytes just didn't make sense.
And that's how the smallest tag reached 40MB, making it only 13.7% of the original size, or over 7x smaller.
FROM oven/bun:${VERSION} as bun
FROM alpine:3.18 as alpine
RUN apk add upx
COPY --from=bun /usr/local/bin/bun /usr/local/bin/
WORKDIR /usr/local/bin
- RUN upx --all-methods --no-lzma bun
+ RUN upx --all-methods bun
FROM frolvlad/alpine-glibc
COPY --from=alpine /usr/local/bin/bun /usr/local/bin/
I don't know what you expected. This is getting boring; we just removed --no-lzma and got an image that's almost 25% smaller.
Results
First of all, the vulnerabilities have been reduced to 0 on all images! Even if the size had stayed the same, I would still be happy with just that.
As for size, here's a graph showing the different tags compared to the original. Note how even the zero-compromise bunx image is smaller by ~25%; it functions the same as the original, just without the extra security risks.
And that's what I call an absolute win! I hadn't even dreamt of results this good when I started this project.
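If you'd rather verify the numbers than trust my graph, pulling the images and listing them locally is enough (the tags below are examples; pick whichever versions you want to compare):
docker pull oven/bun:1.0.0
docker pull dejangegic/bun:1.0.0-smallest
docker images | grep -E "oven/bun|dejangegic/bun"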
How to use it
"You have drowned us in theory and your process, show us how to use it already!" I hear you say, and you're justified. Let's start by setting up an example project.
Let's suppose we have a project we want to containerize with the following file structure:
.
├── ./app.js
├── ./anotherJsFile.js
├── ./models
├── ./node_modules
├── ./.env
├── ./package.json
└── ./package-lock.json
Now we need to add a Dockerfile, and a .dockerignore file to exclude the sensitive files. To help you out, I'll provide the .dockerignore later on so as not to clutter this guide any further.
So here's how we'll start with the Dockerfile:
# Feel free to use any version you want, or just simply use the latest
FROM dejangegic/bun:1.0.0-smallest
# set working directory to /app
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install dependencies
RUN bun install
# Copy all files
COPY . .
# Run the app on container start (optional)
CMD ["bun", "app.js"]
We copied the package.json and package-lock.json files first because of Docker's layer caching mechanism, which skips rebuilding steps until it reaches one whose inputs have changed. In this case, that makes sure the bun install step is only re-executed when there's a dependency change.
There's not really any difference between using this image, the official Bun image, or even Node.js.
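Building and running the container works exactly like it would with any other base image (the image name and port below are just examples; use whatever your app actually listens on):
# build the image from the directory containing the Dockerfile
docker build -t my-bun-app .
# run it and expose the app's port
docker run --rm -p 3000:3000 my-bun-app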
If you want to check out the smallest Bun image, you can hop straight to dejangegic/bun on Docker Hub.