🧄 Bun Frameworks

4/16/2024 at 6:07:53 PM • Updated: 4/17/2024 at 8:58:41 AM

Well, by now you have probably figured out that I like to prefix each blog post section with "The", so let's keep that going! Oh, and also having an emoji in the title - yes, I know, Bun's mascot is a, well, bun, but there is no bun emoji, so I had to improvise.

Anyway, let's get back on topic!

The Problem

This blog post is actually about comparing the performance of different Bun frameworks. I asked a question about it on X, formerly known as Twitter, but of course didn't get a response, since I really don't have a following there (or anywhere for that matter 😔).

Now, because my blog is still pretty new and needs more content, I decided to write a blog post about it. I mean, why not? It's a good topic, right? Right?

Let's get started!

The Setup

As for the test setup, I will be using:

  • Laptop - MacBook Pro 16-inch, 2021, 32GB RAM with the Apple M1 Max chip on Sonoma 14.4.1.
  • Bun v1.1.3

You will find the full codebase on GitHub: github.com/bobalazek/bun-frameworks

One thing that I did differently in this case is that, instead of using tools like Locust, Goose, Bombardier or similar, I decided to quickly write my own benchmarking tool, in Bun of course. In the end it doesn't really matter, as we basically only care about the relative performance difference between the different frameworks.
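
I won't paste the whole tool here, but the core idea is roughly this (a simplified sketch, not the exact code from the repo - the target URL, the request count and the strictly sequential loop are just illustrative):

```ts
// bench.ts - run with `bun bench.ts` while one of the servers is listening.
// Simplified sketch; URL, request count and the sequential loop are placeholders.

const TARGET_URL = "http://localhost:3000";
const TOTAL_REQUESTS = 100_000;

// Cold start: how long the very first request takes.
const coldStart = performance.now();
await (await fetch(TARGET_URL)).text();
const coldStartMs = performance.now() - coldStart;

// Fire the requests one after another and record each response time.
const durationsMs: number[] = [];
const benchStart = performance.now();
for (let i = 0; i < TOTAL_REQUESTS; i++) {
  const start = performance.now();
  const res = await fetch(TARGET_URL);
  await res.text(); // make sure the body is fully consumed
  durationsMs.push(performance.now() - start);
}
const elapsedSeconds = (performance.now() - benchStart) / 1000;

console.log(`Cold start: ${coldStartMs.toFixed(3)} ms`);
console.log(`Average RPS: ${Math.round(TOTAL_REQUESTS / elapsedSeconds)}`);
```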

Here is the list of frameworks that I will be testing:

  • Bun (the built-in HTTP server, as a baseline)
  • ElysiaJS
  • Hono
  • Hattip
  • Fastify
  • H3
  • Koa
  • Hapi
  • Express
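
Every server does the same thing: return a plain "Hello, World!" string. As an example, here is roughly what the plain Bun and ElysiaJS variants look like (a sketch based on the frameworks' public APIs, not necessarily the exact entry files from the repo - the ports and single-file layout are just for illustration):

```ts
// Illustrative only; the repo presumably has one entry file per framework.
import { Elysia } from "elysia";

// Plain Bun - the built-in Bun.serve, used as the baseline.
Bun.serve({
  port: 3000,
  fetch() {
    return new Response("Hello, World!");
  },
});

// ElysiaJS - the same response, going through the framework.
new Elysia().get("/", () => "Hello, World!").listen(3001);
```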

The Results

This is by no means a scientific test, but it should give you a rough idea of the performance of the different Bun frameworks. I ran the test multiple times and took the result that seemed the most consistent. Each framework receives 100,000 requests and the response text for each request is "Hello, World!".
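
As for the numbers in the table below: the tool records every individual response time and then derives the statistics from that sorted list, along these lines (again a sketch - the `durationsMs` array comes from the benchmarking snippet above, and the nearest-rank percentile is just one way of doing it):

```ts
// Turn the recorded per-request times (in ms) into the statistics in the table.
function summarize(durationsMs: number[]) {
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const mean = sorted.reduce((sum, v) => sum + v, 0) / sorted.length;
  // Nearest-rank percentile over the sorted response times.
  const percentile = (p: number) =>
    sorted[Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)];
  const variance =
    sorted.reduce((sum, v) => sum + (v - mean) ** 2, 0) / sorted.length;

  return {
    mean,
    median: percentile(50),
    p75: percentile(75),
    p95: percentile(95),
    p99: percentile(99),
    std: Math.sqrt(variance),
  };
}
```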

| Framework | Average RPS | Cold start (ms) | Average (ms) | Median (ms) | Mean (ms) | 75th perc. (ms) | 95th perc. (ms) | 99th perc. (ms) | Std. (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bun | 28778 | 22.989 | 0.035 | 0.033 | 0.035 | 0.035 | 0.042 | 0.065 | 0.024 |
| elysiajs | 28762 | 10.070 | 0.035 | 0.032 | 0.035 | 0.035 | 0.043 | 0.074 | 0.025 |
| hono | 28295 | 9.596 | 0.035 | 0.033 | 0.035 | 0.036 | 0.042 | 0.066 | 0.023 |
| hattip | 27746 | 6.442 | 0.036 | 0.034 | 0.036 | 0.036 | 0.043 | 0.066 | 0.026 |
| fastify | 22527 | 12.610 | 0.044 | 0.040 | 0.044 | 0.043 | 0.056 | 0.088 | 0.042 |
| h3 | 21079 | 4.334 | 0.047 | 0.044 | 0.047 | 0.047 | 0.058 | 0.091 | 0.033 |
| koa | 20872 | 14.236 | 0.048 | 0.043 | 0.048 | 0.046 | 0.063 | 0.095 | 0.047 |
| hapi | 18919 | 17.766 | 0.053 | 0.047 | 0.053 | 0.051 | 0.064 | 0.102 | 0.052 |
| express | 18779 | 9.864 | 0.053 | 0.048 | 0.053 | 0.052 | 0.065 | 0.101 | 0.048 |

  • RPS (Requests Per Second) - looking purely at requests per second, Bun, ElysiaJS and Hono are very close to each other, with Bun and ElysiaJS essentially tied at the top. Hattip is also very close, with a negligible difference. H3 did surprise me a bit, as from what I read it should be faster, but it's not. Fastify, H3 and Koa are slower, with Hapi and Express being the slowest, as expected.
  • Cold Start - the time it takes for the first request to be served. Hattip and H3 take the crown here (H3 is actually the quickest of all), with Hono and ElysiaJS a bit behind. Bun is surprisingly the slowest of all here, and I am not really sure why. I ran the benchmark multiple times and always got the same results.
  • Other Response Times - ElysiaJS, Hono and Bun are again very close to each other, with basically rounding-error differences. Hattip is a tiny bit slower, but still very close. Fastify, H3 and Koa are slower, with Hapi and Express being the slowest again.

The Conclusion

I am not really sure what to make of these results. I was expecting ElysiaJS to be the fastest, but I didn't expect Hono and Bun to be so close. I also didn't expect Hattip to be so close to the top. H3 was a bit of a surprise, as I was expecting it to be faster.

At the end of the day, raw speed is not the only thing that matters. There are many other factors that you need to consider when choosing a framework, such as ease of use, community support, documentation, etc.

The End

I hope you enjoyed this blog post and found it useful. If you have any questions, feel free to reach out to me on X, formerly known as Twitter, or via email.

Until next time! 🚀