Show HN: I'm rewriting a web server written in Rust for speed and ease of use

ferron.sh

59 points by dorianniemiec 10 hours ago

Hello! I got quite a lot of feedback on a web server I'm building, so I'm rewriting the server to be faster and easier to use.

I (and maybe some other contributors?) have optimized the web server's performance, especially for static file serving and reverse proxying (the latter very recently).

I also picked a different configuration format and specification, which I believe is easier to write.

Automatic TLS is also enabled by default out of the box; you don't even need to enable it manually, as you did in the original server I was building.

Yesterday, I released the first release candidate of my web server's rewrite. I'm so excited about this. I have even seen some people serving websites with the rewritten web server while it was still in beta.

Any feedback is welcome!

selectnull 8 hours ago

There is something funny going on in the benchmarking section. If you look at the charts, they don't benchmark the same servers across the 4 examples.

Each of the 4 charts has data for Ferron and Caddy, but then includes data for lighttpd, Apache, NGINX, and Traefik selectively, such that each chart shows exactly four servers.

That doesn't inspire confidence.

  • crote 7 hours ago

    It's also using their own benchmarking tool, rather than one of the dozens of existing ones. That doesn't mean they are cheating, but it is a bit suspicious.

  • troupo 8 hours ago

    > That doesn't inspire confidence.

    The problems start even higher up the page, in the "The problem with popular web servers" section, which doesn't inspire confidence either.

    From "nginx configs can become verbose" (because nginx is not "just" a web server [1]) to non-sequiturs like "Many popular web servers (including Apache and NGINX) are written in programming languages and use libraries that aren't designed for memory safety. This caused many issues, such as Heartbleed in OpenSSL"

    [1] Sidetrack: https://x.com/isamlambert/status/1979337340096262619

    Until ~2015, GitHub Pages hosted over 2 million websites on 2 servers with a multi-million-line nginx.conf, edited and reloaded per deploy. This worked incredibly well, with github.io ranking as the 140th most visited domain on the web at the time.

    Nginx performance is fine (and that's probably why it's not included in the static page "benchmark").

    • sim7c00 7 hours ago

      It's funny that he mentions unsafe code in Apache and NGINX and then complains about an OpenSSL bug (one that's more than 10 years old, btw).

      If this is any indication of the logic put into the application, no memory-safe language will save it from terrible bugs!

      • dorianniemiec 4 hours ago

        Heartbleed might be more than 10 years old, but it was indeed a serious vulnerability...

        From https://www.heartbleed.com

        > The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.

        Also, a program being memory-safe doesn't mean it's bug-free; bugs unrelated to memory safety still exist (path traversals, for example, are caused by improper sanitization or validation of input).

Etheryte 7 hours ago

I know it's not popular to care about these things these days, but please consider a different installation mechanism than curl piped into sudo bash. It's irresponsible and normalizes a practice that never should've happened.

  • QuantumNomad_ 7 hours ago

    They do offer other installation methods already.

    Installation via package managers (Debian/Ubuntu), using a repo provided by Ferron

    https://ferron.sh/docs/installation/debian

    Installation as a Docker container

    https://ferron.sh/docs/installation/docker

    And more.

    • Etheryte 6 hours ago

      That's true, but that's not what's front and center. Curl-sudo-bash is the first thing you see on the site; all the other options are close to the bottom of the page. Defaults matter, and people tend to use whatever option is presented first unless they have a good reason to do otherwise.

      • olalonde 6 hours ago

        > using repo provided by ferron

        This poses a similar security risk to executing the "curl-sudo-bash".

        • QuantumNomad_ 6 hours ago

          Yeah I point that out too in a sibling comment.

          My personal opinion is that curl piped to bash is not much worse than any other third-party binary installation method. (Third-party meaning from a source other than your distro's package repo, other than brew, or other than some kind of official App Store-like thing on your platform.)

          If this project isn't already available in the official package repos of various distros, it eventually will be. And the more cautious among us will probably want to wait until that point.

          For me personally, I wouldn't have any big concerns about the curl-piped-to-bash install method on some of my servers. On my personal laptop (macOS), I'd probably rather build it from source (which is also an available method, since it's open source).

        • dorianniemiec 4 hours ago

          Yeah, it can be risky if Ferron's servers somehow get compromised...

          The best bet would be using official distro repositories...

          But for now, I'm providing .deb packages so that people can easily install Ferron on Debian and the like.

  • maccard 5 hours ago

    I care about these things.

    But this is overblown. What's your threat model here? You're downloading a random thing from the internet and executing it. 99% of people are on single-user machines, so root access doesn't change much; you're screwed just by executing the thing if it's malicious. Doing this is no worse than installing and running a random .deb, or running npm install.

    • yjftsjthsd-h an hour ago

      It's not just a security thing. If you install something via curl|bash, how do you uninstall it? How do you update it? Do you know what it did to your machine? What config files it touched?

  • olalonde 7 hours ago

    Can we stop with this nonsense already? If you trust them enough to run their server code, why wouldn't you trust them with the installation script?

    • earthnail 7 hours ago

      Because untrustworthy websites can piggyback on the brand name.

      "Download ffmpeg here: sudo bash -c ..."

      And then the installation script from our malicious site installs ffmpeg just fine, plus some stuff you have no idea about. And you never find out that you've just been hacked.

      • dns_snek 6 hours ago

        Can you repeat this mental exercise for every other installation method you can think of? e.g. distributing deb/rpm files, distributing AppImages, asking users to add your custom repository and signing key?

        (Yes, I know the last one has built-in benefits for automatic updates, but that's not going to protect you on initial installation, and its benefits can be replicated more portably in any other auto-update mechanism with a similar amount of effort)

        ((And if you have the patience to set up a custom repository, you can still simplify the initial installation process with a "curl|bash" script))

      • QuantumNomad_ 7 hours ago

        If you get your install instructions from an untrustworthy website, there’s nothing preventing them from telling you to use a third-party apt repository or ppa that gives you a malicious version of the thing.

        There's not really a difference between curl piped to bash and installing packages from a third-party package repository that the distro maintainers have no involvement with.

    • adastra22 7 hours ago

      I don’t trust them enough to run as root.

      • udev4096 7 hours ago

        But you have to. Nginx, Caddy, Traefik, etc. cannot run without root, and even where you can run them without it, it would be far more limiting.

        • QuantumNomad_ 7 hours ago

          Only for binding to ports under 1024, really, like 80 (HTTP) and 443 (HTTPS). Once it has bound to the ports, it can drop down to running as a low-privilege user (usually named www or httpd or similar).

          On Linux you can allow your program to bind to those ports even without running the program itself as root.

          https://superuser.com/questions/710253/allow-non-root-proces...
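
          For illustration, a minimal sketch of that bind-then-drop pattern (my own sketch, not Ferron's code; it assumes the libc crate and uses UID/GID 33 as a stand-in for a www-data-style user):

            use std::net::TcpListener;

            fn main() -> std::io::Result<()> {
                // Bind the privileged port while we still have the rights to
                // do so (as root, or with CAP_NET_BIND_SERVICE on the binary).
                let listener = TcpListener::bind("0.0.0.0:80")?;

                // Drop to a low-privilege user before serving any traffic;
                // the group must be dropped before the user.
                unsafe {
                    assert_eq!(libc::setgid(33), 0, "setgid failed");
                    assert_eq!(libc::setuid(33), 0, "setuid failed");
                }

                for stream in listener.incoming() {
                    let _stream = stream?; // handled as the unprivileged user
                }
                Ok(())
            }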

          • dorianniemiec 4 hours ago

            When installed, for example, with the installer script, Ferron would run as a dedicated user for running the web server. Ferron's binary would also have the "CAP_NET_BIND_SERVICE" capability set on it (via setcap), so that it doesn't have to run as root.

        • adastra22 6 hours ago

          1) That isn’t true.

          2) Even if it were, I’m not going to do so while evaluating an unknown program.

    • alexnewman 7 hours ago

      Read how TLS works. Many people can MITM. That's why we sign applications.

      • dns_snek 6 hours ago

        If they can MITM the installation script delivered over HTTPS, they can also MITM the website delivered over HTTPS.

        You can have 10 step instructions for users to add your PGP signing key and install your APT repository, but what difference does it make? None at all. A malicious website will copy your instructions and replace the signing key and the repository URL with their own.

sceptic123 4 hours ago

"I don't like nginx config so I'm going to write a whole application just so that I don't have to write nginx config".

Maybe just write an nginx config generator instead?

It's also interesting that the actual config looks quite a lot like nginx config.

earthnail 8 hours ago

Edit: just tried it for serving a FastAPI app. It's fantastic. Instant TLS via Let's Encrypt. There may be other web servers that are equally easy, but this one is certainly easier than Apache or nginx, which I've used so far. Love it.

--

Reach out to the folks at Kamal. They wrote their own reverse proxy because they thought Traefik was too complex, but they might be super happy about yours if Ferron is more powerful yet easy to configure, because it might solve more of Kamal's problems.

Not affiliated with Kamal at all, just an idea.

  • dorianniemiec 4 hours ago

    Thank you so much! I want to put a line from your comment on Ferron's website as social proof. :)

  • zsoltkacsandi 8 hours ago

    They wrote their proxy because the declarative configuration of the existing proxies didn't fit into their deployment flow.

habibur 7 hours ago

Looking at the graphs, I think it would have been better to market it as "just as performant as nginx and HAProxy" instead of "faster than all ...", while highlighting simplicity as the added benefit over them all.

  • dorianniemiec 4 hours ago

    Thank you for the feedback!

    I did the reverse proxy benchmarks for NGINX when someone opened a GitHub issue about the missing NGINX benchmarks and asked about benchmark comparisons. It turned out that, yeah, Ferron is close to NGINX's reverse proxy performance.

  • amelius 7 hours ago

    But nginx has acquired a lot of features over the years, which has pros and maybe also cons.

rglullis 7 hours ago

> Any feedback is welcome!

Read https://www.joelonsoftware.com/2006/12/09/simplicity/ and ask yourself if you are truly solving anyone's problem or if you are just looking for a way to rationalize the amount of time you are spending on a hobby.

  • ramon156 7 hours ago

    Wow, what a harsh comment. You make it sound like we should squeeze every bit of efficiency out of people, or else we're wasting time.

    • rglullis 6 hours ago

      > like we should squeeze any form of efficiency

      The complete opposite. It's OP that's trying to "optimize the web server for reverse proxying and static file serving", when what we have out there is more than enough.

      > or you're wasting time

      "Wasting time" is not a problem. If OP is doing working on things because it brings them pleasure and they are hoping to learn from it, more power for them. What bugs me about these types of posts is when people are set on the "build a better mouse trap" mentality and want others to validate them.

      It may sound "harsh" to you, but if I came out asking for "any type of feedback" while trying to figure out whether an idea is worth pursuing, I'd be pretty upset if I kept chasing an invisible dragon because the community was more concerned about "hurting my feelings" than being upfront and giving a warning like: this might be interesting to you, but it's not solving any real pain point. Keep that in mind when deciding whether work on this will be worthwhile.

      • dorianniemiec 3 hours ago

        > It's OP that's trying to "optimize the web server for reverse proxying and static file serving", when what we have out there is more than enough.

        I have optimized it so that it would be faster than the original server I had been working on.

        > (...) give some warning like this might be interesting to you but it's not solving any real pain point. Keep that in mind when deciding if work on this will be worthwhile.

        If you feel the project isn't solving a real pain point for you, you don't have to use it! I was showcasing my web server to interested people on Hacker News.

  • dorianniemiec 4 hours ago

    Oh, thank you for bringing up that blog post about simplicity, it was an interesting read!

  • udev4096 7 hours ago

    It's good to have as many web servers as possible out there. Stop being so harsh and touch some grass

    • rglullis 6 hours ago

      > It's good to have as many web servers as possible out there.

      The problem space of "web servers to serve static files and reverse proxy" is fairly small; how many differing solutions and designs would be required to satisfy your idea of "as many as possible"?

      At what cost? For what benefit?

      Again: if OP wants to work on this because they take joy in it, fine. But be honest about it (to themselves and to others) instead of coming up with all sorts of rationalizations and biased comparisons when talking about the alternatives.

mynewaccount00 8 hours ago

> Security is imperative

> Install with sudo curl bash

  • hacker_homie 7 hours ago

    This is kinda funny, but what is a better alternative for new projects on Linux?

    • arccy 7 hours ago

      It's a Rust project that claims to build static binaries; you should be able to just download the server binary.

      • natrys 7 hours ago

        Yes it seems the binaries are here: https://ferron.sh/download

        I will say this, though: it's probably not rational to be okay with blindly running some opaque binary from a website, but then flip out when it comes to running an install script from the same people and the same domain behind the same software. From a security PoV, at least, I don't see how there should be any difference, but it's true that install scripts can be opinionated and litter your system by putting files in unwanted places, so there are nevertheless strong arguments outside of security.

    • gregoriol 7 hours ago

      Why not the usual package repositories and distribution by the official ones?

      • jraph 7 hours ago

        That's a slow process, and you need someone to do the packaging (either yourself or a volunteer) for each distro, which is not trivial to master and requires time. The "new" qualifier in the parent comment is key here.

        Open Build Service [1] / openSUSE Build Service [2] might help a bit there, though, by providing a tool to automate packaging for different distributions.

        [1] https://openbuildservice.org/

        [2] https://build.opensuse.org/

      • PufPufPuf 7 hours ago

        Most Linux distributions won't package an unknown project. A chicken-and-egg problem. You could create your own PPA, but that is basically the same as sudo curl bash in terms of security.

  • jagged-chisel 7 hours ago

    How’s that worse than downloading a random installer?

  • theandrewbailey 7 hours ago

    It's the Linux equivalent of downloading and running random binaries in Windows.

    • voidUpdate 7 hours ago

      *running as administrator

      • magackame 6 hours ago

        Gonna steal all your files, passwords and crypto as a regular user anyway?

austin-cheney 7 hours ago

I wrote my own web server from scratch last year for the exact same reasons: starting from scratch with Apache and NGINX is too painful for my needs.

Here are my learnings:

* TLS (HTTPS) can be easily enabled by default, but it requires certificates. This involves a learning curve for the application developer but can be automated away from the user (see the certificate sketch after this list).

* The TLS certs will not be trusted by default until they are added to the OS and browser trust stores. In most cases this can be fully automated. It's simplest on Windows, though Firefox still uses its own trust store. Linux requires a package to add certs to each browser's trust store, and sudo to add them to the OS. Self-signed certs cannot be trusted on macOS through automation; the user has to manually add the certs to the keychain.

* Everything executes faster when WebSockets are preferred over HTTP. An HTTP server is not required to run a WebSocket server, allowing the two to run in parallel. If the server listens for the WebSocket handshake message and determines the connection to instead be HTTP, it can support both WebSockets and HTTP on the same port (see the sketch after this list).

* The complete user configuration and preferences for an HTTP or WebSocket server can be a tiny JSON object, including proxy and redirection support by a variety of addressable criteria. Traffic redirection should be identical for WebSockets and HTTP, both from the user's perspective and in the internal execution.

* The server application can come online in a fraction of a second. New servers coming online will also take just milliseconds, aside from certificate creation.
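
On the first two points, generating the certificate is the easy half; here's a sketch using the rcgen crate (my illustration, not necessarily how anyone's server does it; API as of rcgen 0.13; file names arbitrary):

    use rcgen::{generate_simple_self_signed, CertifiedKey};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Generate a self-signed certificate for local development.
        let CertifiedKey { cert, key_pair } =
            generate_simple_self_signed(vec!["localhost".to_string()])?;

        // Persist the certificate and private key for the server to load.
        std::fs::write("cert.pem", cert.pem())?;
        std::fs::write("key.pem", key_pair.serialize_pem())?;
        Ok(())
    }

Getting that certificate into the OS and browser trust stores is the hard part described above; generation itself is trivial to automate.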
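
On the WebSocket point, one way to serve both protocols on a single port is to peek at the request head before dispatching; a standard-library-only sketch (not how any particular server does it):

    use std::io::Result;
    use std::net::{TcpListener, TcpStream};

    // A WebSocket connection starts life as an HTTP request with an Upgrade
    // header, so peeking at the head is enough to pick a handler without
    // consuming any bytes. (A real server would loop until the full header
    // block has arrived.)
    fn is_websocket_handshake(stream: &TcpStream) -> Result<bool> {
        let mut buf = [0u8; 1024];
        let n = stream.peek(&mut buf)?; // bytes stay queued for the handler
        let head = String::from_utf8_lossy(&buf[..n]).to_ascii_lowercase();
        Ok(head.contains("upgrade: websocket"))
    }

    fn main() -> Result<()> {
        let listener = TcpListener::bind("0.0.0.0:8080")?;
        for stream in listener.incoming() {
            let stream = stream?;
            if is_websocket_handshake(&stream)? {
                // hand off to the WebSocket handshake/handler
            } else {
                // hand off to the plain HTTP handler
            }
        }
        Ok(())
    }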

  • dorianniemiec 3 hours ago

    Oh, nice to meet you!

    > TLS (HTTPS) can be easily enabled by default, but it requires certificates. This involves a learning curve for the application developer but can be automated away from the user.

    Yeah, these certificates can be obtained from Let's Encrypt automatically.

    > Everything executes faster when WebSockets are preferred over HTTP. An HTTP server is not required to run a WebSocket server, allowing the two to run in parallel. If the server listens for the WebSocket handshake message and determines the connection to instead be HTTP, it can support both WebSockets and HTTP on the same port.

    Oh, seems like an interesting observation!

k_bx 8 hours ago

I was previously waiting for River https://github.com/memorysafety/river/ to take off. It's built on top of Pingora, the reverse-proxying library previously open-sourced by Cloudflare, but just like many other "grant-based" projects, it died when the funding stopped.

I really like the spirit and simplicity of Ferron, and will try it out when I have a chance. I've been waiting to gradually throw out nginx for a while now; nothing has checked all the boxes so far.

  • dorianniemiec 3 hours ago

    Thank you! Excited to see what you'll serve with Ferron.

ansc 5 hours ago

Great to see! I would love to try it, but I depend on graceful updates of configuration (i.e. primarily adding and removing backends). I can't find anything about that. Is it supported, either through updating configs or through an API?

  • dorianniemiec 3 hours ago

    Thank you! Yes, graceful restarts in Ferron are supported on Linux, Unix, and the like. You just need to send a SIGHUP signal to the Ferron process, or simply run "systemctl reload ferron" or "/etc/init.d/ferron reload" as root.
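
    For anyone implementing the same thing, the general shape of a SIGHUP-triggered reload looks roughly like this (a sketch using the signal-hook crate, not Ferron's actual code):

      use signal_hook::{consts::SIGHUP, iterator::Signals};

      fn main() {
          let mut signals = Signals::new([SIGHUP]).expect("failed to register SIGHUP");
          std::thread::spawn(move || {
              for _ in signals.forever() {
                  // Re-read the config and swap it in atomically, letting
                  // in-flight requests finish against the old configuration.
                  reload_config();
              }
          });
          // ... run the accept loop ...
      }

      fn reload_config() {
          // hypothetical: re-parse the config file and publish the new
          // version to worker threads (e.g. behind an Arc swap)
      }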

npodbielski 4 hours ago

So I have to connect my domain to your IP address? Why, when there is a perfectly fine HTTP auth method for Let's Encrypt? This is strange and unnecessary.

  • dorianniemiec 4 hours ago

    Well, you're probably talking about the automatic TLS demo. People installing Ferron on their servers don't have to use this demo!

    • npodbielski 41 minutes ago

      No, I do not think so. The linked page mentions:

      "Point a subdomain named ferrondemo of your domain name (for example, ferrondemo.example.com) to either: CNAME demo.ferron.sh or A 194.110.4.223"

      This is really strange and makes my Spidey sense tingle. If the goal is just to point your domain at this server, it should not require DNS auth; plain HTTP auth is fine. DNS, sure, if you want to optimize the reverse proxy, because that would also be possible via HTTP auth for every subdomain separately. If you just need a web server quickly, pointing your domain at some other dude's domain is not the way to go.

      This feels weird.

yincong0822 6 hours ago

That's awesome, congrats on reaching the release candidate stage! I'm curious about the performance improvements you mentioned. Did you benchmark against Go web servers like Caddy or fasthttp? I also really like that you've made automatic TLS the default; that's one of those "quality of life" features that make a huge difference for users.

I’m working on an open-source project myself (AI-focused), and I’ve been exploring efficient ways to serve streaming responses — so I’d love to hear more about how your server handles concurrency or large responses.

  • dorianniemiec 3 hours ago

    Thank you!

    > Did you benchmark against Go web servers like Caddy or fasthttp?

    I have already benchmarked Ferron against Caddy! :)

    > so I’d love to hear more about how your server handles concurrency or large responses.

    Under the hood, Ferron uses the Monoio asynchronous runtime.

    From Monoio's GitHub repository (https://github.com/bytedance/monoio):

    > Moreover, Monoio is designed with a thread-per-core model in mind. Users do not need to worry about tasks being Send or Sync, as thread local storage can be used safely. In other words, the data does not escape the thread on await points, unlike on work-stealing runtimes such as Tokio.

    > For example, if we were to write a load balancer like NGINX, we would write it in a thread-per-core way. The thread local data does not need to be shared between threads, so the Sync and Send do not need to be implemented in the first place.

    Ferron uses an event-driven concurrency model (provided by Monoio), with multiple threads being spread across CPU cores.
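
    For the curious, the thread-per-core pattern Monoio encourages looks roughly like this (a sketch assuming the monoio and num_cpus crates, not Ferron's actual source):

      use std::thread;

      fn main() {
          // One OS thread per core, each with its own single-threaded runtime;
          // tasks never migrate, so they don't need to be Send or Sync.
          let threads: Vec<_> = (0..num_cpus::get())
              .map(|_| {
                  thread::spawn(|| {
                      let mut rt = monoio::RuntimeBuilder::<monoio::FusionDriver>::new()
                          .build()
                          .expect("failed to build runtime");
                      rt.block_on(async {
                          // each thread runs its own accept loop, typically on
                          // a SO_REUSEPORT listener so the kernel spreads
                          // incoming connections across threads
                      });
                  })
              })
              .collect();
          for t in threads {
              t.join().unwrap();
          }
      }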

GrayShade 8 hours ago

Hey! Sorry, I didn't get the chance to test it yet (like I promised when you launched), but can you say more about the rewrite? The title made me think you're porting it from Rust to another language :-).

  • dorianniemiec 3 hours ago

    No problem! I'm rewriting the codebase of Ferron (the rewrite is still in Rust) to follow some suggestions people made for the web server (a faster async runtime, a different configuration format). The original codebase would have been a bit hard to adapt to these suggestions...

supermatt 7 hours ago

Looks awesome, but the docs page seems to return a 200 yet is completely empty, and it shows an `x-ferron-cache: HIT` header. Maybe a misconfiguration somewhere?

  • atraac 7 hours ago

    Works perfectly fine here in Brave/Chromium

    • supermatt 7 hours ago

      Not sure if this will help to debug. It's definitely not working for me in Safari. Could be a downstream cache, I guess? The browser is using iCloud Private Relay. I have "disable cache" checked in the network inspector. The only plugin installed is 1Password, but I get the same problem with it disabled. Restarted the browser, same issue. It seems to work in private mode, but not otherwise.

      Summary:

        URL: https://ferron.sh/docs
        Status: 200
        Source: Network

      Request headers:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip, deflate, br
        Accept-Language: en-GB,en;q=0.9
        Priority: u=0, i
        Sec-Fetch-Dest: document
        Sec-Fetch-Mode: navigate
        Sec-Fetch-Site: none
        User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.6 Safari/605.1.15

      Response headers:

        Accept-Ranges: bytes
        Cache-Control: public, max-age=900
        Content-Encoding: br
        Content-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline'; object-src 'none'; img-src 'self' data:; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://analytics.ferron.sh; connect-src 'self' https://analytics.ferron.sh
        Content-Type: text/html
        Date: Tue, 21 Oct 2025 10:07:46 GMT
        ETag: W/"ba17d6fadf70c9f0f3b08511cd897f939b6130afbed2906b841119cd7fe17a39-br"
        Server: Ferron
        Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
        Vary: Accept-Encoding, If-Match, If-None-Match, Range
        X-Content-Type-Options: nosniff
        x-ferron-cache: HIT
        X-Frame-Options: deny

luckydata 8 hours ago

what are the advantages vs something like https://caddyserver.com?

  • nicce 8 hours ago

    It is about min-maxing. If you have a backend written in Rust that uses Hyper, for example, Caddy will be the bottleneck.

    It depends, of course, on the type of backend (if it's limited by other I/O, the Caddy bottleneck doesn't matter).

  • Quarrel 8 hours ago

    I've never used Ferron, but if you look at the graphs, he gives comparisons.

    So, I guess, performance + ease of use. Obviously, Caddy is much more mature though.

rvz 8 hours ago

This has a good chance to succeed.

Good luck.

jokethrowaway 8 hours ago

Kudos!

This is great, I started working on a similar project but never had the discipline to sit through all the edge cases.

Maybe I'll start building it on top of ferron!

I would love to have a minimalistic DIY serverless platform where I can compile Rust functions (or anything else, as long as it matches the type signature) to a .so, dynamically load the .so, and run the code when a certain path is hit (a sketch of that below).

You could even add JS support relatively easily with V8 isolates.

Lots of potential!
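
For what it's worth, the loading half of that idea is small; here's a sketch with the libloading crate (the handler.so path and the handle_request name/signature are made up for illustration):

    use libloading::{Library, Symbol};

    fn main() {
        // SAFETY: loading and calling foreign code is inherently unsafe; a
        // real platform would want to sandbox or version-check plugins.
        unsafe {
            let lib = Library::new("./handler.so").expect("failed to load plugin");

            // The plugin must export this with #[no_mangle] pub extern "C".
            let handle: Symbol<unsafe extern "C" fn(*const u8, usize) -> i32> =
                lib.get(b"handle_request\0").expect("symbol not found");

            let path = b"/hello";
            let status = handle(path.as_ptr(), path.len());
            println!("handler returned HTTP status {status}");
        }
    }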

  • dorianniemiec 3 hours ago

    Thank you! :)

    Wishing the best for your concept too!