Took me over a year to finish writing this monster of an article. 4,000+ words, 200+ links, and lots of research covering countless JavaScript runtimes and engines.
Please have a read! I guarantee you'll learn something new.
Thanks for putting in all this work. People should get more credit for writing good survey articles like yours.
One of the regrettable quirks of our industry is the way we replicate theoretically language-neutral components separately in multiple language ecosystems and verticals. I wish we didn't have a separate npm and Cargo. I wish the polyglot runtimes you mention (especially Graal) would see more adoption. I wish we didn't have both Duktape and MicroPython. There's nothing language-specific about efficient garbage collection or compact bytecode or package dependency resolution.
I totally agree about reinventing the wheel. I think our biggest tool against that will be Node-API, which is the one low-level building block that JavaScript runtimes seem to agree on (and heck, I'm sure non-JavaScript engines could implement a decent chunk of Node-API). The Hermes team are currently interested in it (https://github.com/facebook/hermes/pull/1377#issuecomment-29...) for implementing heavy chunks of the ECMAScript spec, like Intl and Temporal, as plug-in features.
It's interesting that you bring up tooling like npm and Cargo; I'd never thought of sharing those. Although I'm no great shakes at Rust, I really, really liked how Cargo did things (e.g. optional dependencies and rigid library structure) and it'd be incredible to have that for the JavaScript ecosystem.
I also keep hearing nothing but great things about Graal – I think it's being slept on.
I always wondered if there is still an ROI for writing a well researched in depth article.
If the insight in the article is actually unique, it'll just get regurgitated by other writers and AIs a hundred times, maybe even with better visuals and diagrams and that'll be the article that gets all the clicks.
This is very true, in life in general. You do a good deed on the street right before your interview, and lo and behold, the person you did the good deed to turns out to be your interviewer! These things (be it an article or a good deed) can definitely pay off.
I had some faith this time, as I saw a good handful of readers opening the email just moments after I'd sent the newsletter out!
And while I figured Hacker News would be a coin flip, for other sites that show link previews, I was optimistic that the cartoons would encourage a click, too. In a time where many writers are using soulless AI-generated poster images, just putting in a token effort to do something original goes a long way.
Author here – the return on investment will be modest, no doubt. I started the newsletter in order to share my thoughts on various topics without having to please a social media algorithm to get seen. Winning a few new readers for my efforts is all I'm really looking for!
As for the article getting regurgitated, I am not too worried. It is simply an awfully long, awfully dense article for any LLM to synthesise more than a handful of bullet points out of, so I think any derivative product is going to make for a far less engrossing read. Indeed, if you try asking ChatGPT about runtimes right now, you'll get a pathetically shallow and mainstream analysis.
There may well be clickbait articles spawned from this, but I'm optimistic that people will find the source and be able to tell in an instant that it's the Real McCoy.
Naw - the internet enables "long-tail" content - but I say this as a person who occasionally writes VERY long "blog-posts" that are technically KB/how-to, magazine article length...
Yes, AI will scrape it and regurgitate it - but over-time it will reach people who need to know - plus it is also helpful for oneself...
Did you consider things like JScript.NET? jsc.exe still comes with every Windows installation, and JScript still runs on IIS server (I have built a few things with it).
And what about things like Adobe Photoshop, After Effects, Premiere, and Illustrator which can run Javascript using either Extendscript in older versions or V8 in modern versions.
> Did you consider things like JScript.NET? jsc.exe still comes with every Windows installation, and JScript still runs on IIS server (I have built a few things with it).
I've not heard of any of these, wow – very cool that it's still bundled with Windows. I suspect that most things related to JScript wouldn't fit the scope of "the last decade", though? I tended to focus on creation date when deciding what should be in scope.
> And what about things like Adobe Photoshop, After Effects, Premiere, and Illustrator which can run Javascript using either Extendscript in older versions or V8 in modern versions.
Hmm, these may well qualify, you're right. Maybe I can add them to the addendum. I'd want to do a bit of research to understand their timeline and genealogy before including them, though. Wrists beginning to hurt for today now, though!
I do recall now that Adobe PDFs can run JavaScript. That would've been worth a mention for sure.
This is the first time I have ever signed up for a newsletter, and I did it without reading the HN comments, or knowing that you wrote it, or that it took so long to write.
I really appreciate such deep dives. I admittedly knew some of this already, as I was obsessed with the edge/JS engine space much like you, but I still learned a lot.
I really liked reading through the blog. Please, take your time, rest and create such beautiful masterpieces in the future too, I would be waiting for them patiently!
Thank you for the kind words! Glad to get this one out of the way so that I can do some less ambitious articles for a change. But hope to make them good in their own way, too!
it's interesting that a lot of innovation happened on mobile runtimes to deal with apple's JIT restriction. what was that about, and why didn't android have the constraint?
of course i can look it up, but you probably have a better insight than the slop i'll find off google.
thanks for the article. these days it's rare to see something so well researched and written while still being able to tell it was authored by a human. cheers
Apple has a JIT restriction because JIT introduces native code that was not present during app review, and app review is where they prohibit calling non-public APIs, at least historically.
Android doesn't have a JIT restriction. API restrictions are expected to be enforced at runtime, not through review time checks.
> Apple has a JIT restriction because JIT introduces native code that was not present during app review, and app review is where they prohibit calling non-public APIs, at least historically.
Another regrettable thing about our industry is the proliferation of locked-gate-in-an-open-field-tier security theater. Apple's PROT_EXEC restriction has zero security benefit: anything JIT-compiled code can do, interpreted code can do too.
(In the same vein, macOS Santa (used by many a tech company to police programs runnable on Apple developer endpoints) doesn't restrict script launches at all. The interpreters that Apple ships with macOS have built-in FFIs, like Python's ctypes, that enable programs launched using these interpreters to do anything a Mach-O binary can do.)
While I respect the sweat and cleverness that went into making JS runtimes work efficiently while wearing Apple's no-PROT_EXEC hairshirt, none of this work ought to have been necessary. Imagine how much further ahead the industry would be if these big brains had focused on solving some other problem.
I totally agree that the lack of JIT has been a huge waste of human resources. NativeScript and React Native have had to move so many mountains to make viable software despite it.
The other replies seem to have answered it! I've never understood why iOS allows JIT inside WKWebView yet not inside bare JavaScriptCore. Maybe because WKWebView runs in its own separate process while a JavaScriptCore instance would run in-process? As for Android, I guess Google are just a little more gung-ho?
> thanks for the article. these days it's rare to see something so well researched and written while still being able to tell it was authored by a human. cheers
Really appreciated, thanks! I do despair that the ease of generating content with AI disincentivises (and discourages) authors from expending real effort, but there's a satisfying inflection point at which you can be sure you've written something that an LLM couldn't possibly have come up with.
> For all the overlap in strategy, notable is the variety in engines underpinning all these runtimes. While Deno continues Node.js's tradition of using V8, we see Bun employing JavaScriptCore, WinterJS using SpiderMonkey, LLRT on QuickJS, and Cloudflare Workers on the tailor-made workerd. No longer is the backend solely a stage for Node.js and V8 – it's now fashionable to pick a runtime and engine optimised for the task.
I don't think this is completely accurate. While Cloudflare's workerd is a tailor-made runtime (equivalent level to deno/node/bun/etc.) it uses the V8 engine.
I see a few mentions of QuickJS, but they all refer to the fork of Bellard's QuickJS https://bellard.org/quickjs/, which I think deserves a mention. It seems to be still active (last release 2025-04-26, GitHub mirror at https://github.com/bellard/quickjs shows some activity).
Both are active now and they are diverging. It is kinda sad, but Bellard seems to want to do his own thing. PRs are traded sometimes, but as time goes on they will become different projects. I did suggest a name change (https://github.com/quickjs-ng/quickjs/discussions/1119)
That said, one thing I've heard a lot from NativeScript TSC members building on top of QuickJS is that QuickJS has staggeringly low overhead for calling C functions. In fact, I was shown benchmarks in which it calls native functions faster than JS functions (perhaps it's easier to optimise because JS functions have complications like closure capture to worry about). I imagine this has conceptual overlap with how V8's Fast API works.
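A quick way to poke at call overhead from plain script (engine-dependent, and nothing like a rigorous benchmark – the `timeCalls` helper and workload here are my own invention, not taken from the NativeScript tests mentioned above):

```javascript
// Compare the cost of a native built-in (Math.max) vs. an equivalent plain JS
// function. Results vary wildly by engine and optimisation tier; in a JIT-less
// engine like QuickJS, native calls reportedly fare unusually well.
function timeCalls(fn, iterations = 1_000_000) {
  let acc = 0;
  const start = performance.now();
  for (let i = 0; i < iterations; i++) acc += fn(i, i + 1);
  return { ms: performance.now() - start, acc };
}

const jsMax = (a, b) => (a > b ? a : b);
console.log("Math.max:", timeCalls(Math.max).ms.toFixed(1), "ms");
console.log("jsMax:", timeCalls(jsMax).ms.toFixed(1), "ms");
```

Running it under different engines (Node, Bun, a QuickJS build) gives a crude feel for how the native/JS call gap shifts between them.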
I embedded it at a game studio for PC and game consoles a few years ago. We opted for JavaScript over Lua for reasons I can’t disclose, but runtime performance was sufficient for our needs. I liked having reference counting since the determinism gave our tech lead a bit of confidence before cert.
We had some performance issues with marshalling and unmarshalling, but that was mostly due to API design and how our codebase was structured. As per usual, optimizing scripts was the best bang for buck.
I didn't do a formal test, but we use quickjs-ng/rquickjs as the basis for our scripting engine. Even with 20+ Rust modules loaded, it takes less than 4ms to spawn, and we spawn a lot of them (multiple per HTTP request that runs through the proxy).
I had a good time being involved with a couple of JavaScript runtimes which didn't make this list, most notably PythonMonkey [1], which embeds SpiderMonkey into Python and uses Python's event loop for its async stuff. Another interesting one was DCP, which is sort of a pseudo JS runtime that runs on top of other JS runtimes (including a custom sandboxed server-side runtime we made [3], but also any web browser), to provide cloud-function-like compute for JS- and WASM-based workloads.
Unrelated to the article, and already well known, is Pyodide which is a Python runtime in JS/WASM. I shoved Pyodide into DCP so people could run Python workloads in web browsers [4]. Crazy stuff...
My analysis on the polyglot runtimes was relatively shallow and I did anticipate I might disappoint, sorry! Thanks for the additions!
I did begin a section on WASM runtimes, so Pyodide would've been of interest, but I could see the scope was likely to grow too much if I got into WASM so I held back.
Thank you very much, really appreciate it! There's a respectable article on engines (https://en.wikipedia.org/wiki/List_of_JavaScript_engines), but indeed, no article on runtimes. I did feel that they were under-discussed compared to engines, though, with most analysis focusing on browsers and Node-alikes, so wanted to add something to the literature.
JScript was the most evil of all the JS implementations. It's the reason why MOST of the things people hate about JS are still in the language.
They wanted to remove a bunch of the badly-designed stuff from the language, but Microsoft had just finished implementing JScript, complete with all those issues, at great expense. They would rather have a language with issues than change their implementation, so the world got stuck with the issues, and there's now too much code on the web to change without something drastic.
I deliberately scoped it to the last decade because I felt it would be very hard to dig up information from around 1996 – would be better for me to leave that period to folks who were active developers around that time and lived through it all.
But I did make sure to give Chakra a few mentions!
GraalVM/GraalJS is the one I am most impressed with. It 'just works' and I've been able to integrate it into my Java web applications easily. I mostly use JS in Java to run handlebars.js and to test JS library code that we use both on the server and the client.
I've heard nothing but high praise for GraalJS, yeah. I wonder whether it'd make any sense inside mobile apps – not due to any feature in mind, but simply out of interest because that's more the area I'm using JavaScript. The first-class interop with Java would be great for Android, though I wonder how cheap its interop with iOS (C-family languages) would be.
This article provides good insight on the boom, but leaves out any insight on the inevitable bust - not on JS per se, but the distinct runtimes.
Deno and Bun seem to be two highly competitive runtimes, each VC-backed and positioned against the other, but the fairy tale of multiple winners seems unlikely in a world that favors power laws.
So then, how do others see these ecosystems surviving over the next decade? What are the canaries? And, how interoperable will our code be?
WinterCG is trying to help standardize web APIs to address these concerns. So one strategy when writing server-side js is to stick to standard APIs as much as possible.
If you do want to leverage runtime-specific code you can isolate that code in separate modules, so if you ever do need to migrate off a particular runtime it's easier to identify/replace that code.
Ultimately it's all JavaScript, and since most of these runtimes are open source even if they're abandoned we might see community forks. Though even if your chosen runtime is completely without support, I don't see a migration off being an extremely urgent or difficult task for most projects.
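One illustrative shape for that isolation strategy (the detection globals below – `globalThis.Deno`, `globalThis.Bun`, `process.versions` – are real today, but the wrapper itself is just a sketch, not an established pattern from any of these projects):

```javascript
// Detect the current runtime via globals each runtime exposes. Bun is
// checked before Node because Bun also defines `process`.
function detectRuntime() {
  if (typeof globalThis.Deno !== "undefined") return "deno";
  if (typeof globalThis.Bun !== "undefined") return "bun";
  if (typeof process !== "undefined" && process.versions?.node) return "node";
  return "unknown";
}

// Callers depend only on this wrapper; migrating runtimes means revisiting
// one module rather than grepping the whole codebase.
function readTextFile(file) {
  switch (detectRuntime()) {
    case "deno":
      return globalThis.Deno.readTextFile(file);
    case "bun":
      return globalThis.Bun.file(file).text();
    default:
      return require("node:fs/promises").readFile(file, "utf8");
  }
}
```

Each branch returns a promise of the file's text, so callers stay runtime-agnostic.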
A very good thought! Deno and Bun have this formidable funding for sure, but Node.js is still innovating. Things like type-stripping and require(esm) come to mind. I think as long as big companies use it for Electron, Node.js will continue to get investment. Even if the investment were to dry up, experience teaches us that old tools die hard.
I would hope that interoperability goes up. The React Native community are closing in on support for Node-API, and with that maybe we can start sharing native code between desktop and mobile. The WinterCG effort is also going well.
As for canaries, I think sustainability is the main thing to look at. What is the surface area of this tool, how much expertise and effort does it take to maintain it, and who is invested in its survival? The more we can share common implementations like Intl and Temporal, the easier it is for smaller players to keep up. And of course, the more the big players try to diverge (looking at you, Chrome), the harder it gets.
I would like to clarify that LLRT is mostly community-contributed, plus one guy from AWS (Hi Richard!). There are many businesses using LLRT modules (I worked a lot on making it modular) and rquickjs.
I am a maintainer of both and I wrote a couple of modules myself. There is no better JS runtime for Rust IMO. It is based on quickjs-ng (a fork of quickjs, made initially due to quickjs's inactivity, though now both projects are active and have diverged) with a lot of the major Node/WinterJS APIs.
Oh right, thanks for the insight. Wondering how to rephrase it. Did AWS at least create it, or fund it?
rquickjs looks like a good one to include in the article. I've just edited it into the polyglot engines section (I think I've got a mix of engines and runtimes in there at this point anyway).
AWS created it and it is still under their name but under the labs thing which means it is a "side project". Unsure how I would phrase it, but just wanted to acknowledge the community and businesses that are building it (maybe stewardship?)
Missed Nombas ScriptEase, a 90’s-era commercial runtime most famous for allowing JavaScript to operate the James Webb Telescope. http://brent-noorda.com/nombas/us/index.htm
Very cool, though when I was qualifying the scope for "the last decade", I was doing it by creation date rather than usage date, so I think I get a pass on this one!
There's a whole section on PhoneGap/Cordova, where, ignoring creation date, even most of the milestone years are outside the "the last decade" timeframe. It's a very strange decision, and leads to things like gjs only being mentioned as an afterthought, which exemplifies the major blind spot that people who came out of the NPM world have for the use of JS in stuff like Firefox (which was doing Electron before Electron) and Gnome (if you've used Gnome in the last 10 years, you've seen JS in action on the desktop). The best explanation for this is that the latter two are so successful at doing what they aim to accomplish that people don't even know it's there, and when things are more about getting things done than they are about hype, then the hype crowd of course fixates on the hyped stuff instead (no matter how short-lived or obviously-doomed-from-the-start).
Thanks for the article! Do you have a solution, if any, for keeping links alive? Anytime I write something with lots of links, I'm always afraid that there's basically no way to retrieve the original page after some time.
Thank you! It did cross my mind. I've archived a couple of the hard-won links in there that I had a feeling were at great risk of rotting. But thankfully a lot of the links are to GitHub pages, so I'm not so worried about those.
Unfortunately, I think the article's readability would suffer if every single link were followed by an [archive] tag. Wikipedia have a better setup for providing both an archive link and a live link, which unfortunately the newsletter format doesn't really lend itself to.
I think copying the links as-is into the Wayback Machine will generally do in a pinch – just a bit of a pain to find the right month to roll back to.
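For what it's worth, the Wayback Machine accepts a partial timestamp in its URLs and redirects to the nearest snapshot, so the month-hunting can be semi-automated. A tiny sketch (`waybackUrl` is a made-up helper name, not an official API):

```javascript
// Build a Wayback Machine lookup URL. A partial timestamp (e.g. a year or
// year+month) makes the archive redirect to the closest snapshot it holds.
function waybackUrl(url, timestamp = "2024") {
  return `https://web.archive.org/web/${timestamp}/${url}`;
}

console.log(waybackUrl("https://example.com/post", "202401"));
// → https://web.archive.org/web/202401/https://example.com/post
```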
At 17 years old, I think it's comfortably outside of the scope of the article I was focusing on (the last decade), but thanks for bringing it to my attention!
> Ejacs is based heavily on work by Brendan Eich (SpiderMonkey, Narcissus) and Norris Boyd (Mozilla Rhino).
I'm not familiar with the term "context tracking", and management of the event loop is a bit lower level than I normally go, but I do know several runtimes use libuv (https://libuv.org) to handle asynchronous I/O, the same as Node.js (which it was created for).
I'm sure there will be JavaScript runtimes out there using some of Rust's asynchronous schedulers like tokio (https://docs.rs/tokio/latest/tokio/), too, but I wouldn't be surprised if a large number of them just do things bespoke.
Could be to draw upon the existing JavaScript ecosystem, or could just be because JavaScript is the language the dev knows best! I feel really nerfed when I have to move from JavaScript to other languages, personally, even if I can get things done in C++ and other languages.
For the same reason someone would use Lua or MicroPython.
If you're moving from a $0.50 to a $1.00 chip due to the overhead of using the higher-level language and save just one $200,000/year dev in the development process, you'd have to sell 400,000 units before the $0.50 chip would save you any money.
Even if you are moving from a $0.50 chip to a $3.00 chip, you can probably still save a significant amount of money on a lot of projects when you put together all the dev-years saved.
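The break-even arithmetic above, written out as a quick check (using the same figures as the comment):

```javascript
// Units to sell before the cheaper chip's per-unit saving repays the
// engineering cost saved by using the higher-level language.
function breakEvenUnits(cheapChip, pricierChip, devCostSaved) {
  return devCostSaved / (pricierChip - cheapChip);
}

console.log(breakEvenUnits(0.5, 1.0, 200_000)); // 400000
console.log(breakEvenUnits(0.5, 3.0, 200_000)); // 80000
```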
> Lastly, JavaScript continues to show its strength as a language for GUI programming, being employed in a variety of ways to develop native apps on mobile phones and Smart TVs, though with web view based apps still being the vogue on desktop.
Really? I thought over the last decade we let other languages into the browser but Javascript was so amazing that no one uses the other languages. Javascript has had all this competition on the UI layer it’s amazing it came out #1. Am I reading the implication correctly?
> I thought over the last decade we let other languages into the browser
Not so much! Rhino was an effort to rewrite Netscape Navigator in Java, and I expect that if that rewrite had taken off, Java might've gained a foothold (though indeed, there were Java applets for a time).
There were also Flash apps, and of course there are some significant games based on WebAssembly and low-level graphics APIs.
But ultimately JavaScript works well for GUI because it is so dynamic, allowing you to be less rigid in how you architect things. And by being a scripting language, it's super quick to iterate on.
You assume it was me that opened the page on a desktop or something with a primitive keyboard. I asked my agent to summarize and provide me the list of links to see if it was there. The list did not list bun. The code it executed through my phantomjs mcp server was simply:
> I really, really liked how Cargo did things (e.g. optional dependencies and rigid library structure) and it'd be incredible to have that for the JavaScript ecosystem.
FYI: https://thenewstack.io/vites-creator-on-a-unified-javascript...
I'm keeping my eye on Void0, yeah! If anyone can live up to such promises, it'll be them.
They'll have a tough job getting sticky ecosystems like React Native to adopt, but hope to see them make a compelling case for their stack.
Just because he spent 1 year ???
The ROI can still be massive even if only a few people actually read it.
It just takes one person who read the article (or found it later via search) to get a huge ROI if it resonates just right.
A job opportunity, an invite to some event, a potential collaboration - the right reader can unlock all kinds of opportunities.
Maybe the ROI is something more personal than getting clicks?
It can be. But there is a certain existential dread, in the hours after sharing something, when you fear it might all have been toil in obscurity
I feel this way.
https://developer.adobe.com/xd/uxp/uxp/versions/
https://developer.adobe.com/photoshop/uxp/2022/ps_reference/
It was a very interesting read, thanks for the effort and sharing it with us.
I indeed did learn (a lot). Thank you sir for this monster:)
Incredible article. I never knew there were so many JavaScript runtimes out there, and that it was possible to run JavaScript on microcontrollers.
I'm relatively new to the JS/TS space and I'm building a mobile app that needs to run on both platforms. Also building a back end for it using Deno.
I thought this article was highly informative and useful. Great job, and thanks!
Thank you! I'd tend to default to Expo if building a mobile app in JavaScript these days, but Capacitor, Tauri, and NativeScript are all good options too.
Fascinating read. Great work on the research. Where do you think it will go from here?
As for JavaScript runtimes, having seen Lynx (and some of the internal work NativeScript is doing), I'd like to see more engine-agnostic runtimes rather than coupling each runtime to one engine.
As for writing, I'd like to stick to more manageable topics in future so that I can write as a hobby rather than as daunting work. My previous "Frontend predictions for 2024" (https://buttondown.com/whatever_jamie/archive/frontend-predi...) was similarly thorough and was an exhausting rush to write up. But simpler articles like "Staying productive with chronic pain" (https://buttondown.com/whatever_jamie/archive/staying-produc...) were possible to finish in a weekend.
There's probably a good story in Node-API, but I think I'd rather take on a research-light topic for a break first!
The article is awesome. Thanks for the mention of Wasmer and WinterJS.
Cool things coming in the pipeline!
it's interesting that a lot of innovation happened on mobile runtimes to deal with apples JIT restriction. what was that about and why didn't android have the constraint?
of course i can look it up, but you probably have a better insight than the slop i'll find off google.
thanks for the article. these days it's rare to see something so well researched and written while still being able to tell it was authored by a human. cheers
Apple has a JIT restriction because JIT introduces native code that was not present during app review, and app review is where they prohibit calling non-public APIs, at least historically.
Android doesn't have a JIT restriction. API restrictions are expected to be enforced at runtime, not through review time checks.
> Apple has a JIT restriction because JIT introduces native code that was not present during app review, and app review is where they prohibit calling non-public APIs, at least historically.
Another regrettable thing about our industry is the proliferation of locked-gate-in-an-open-field-tier security theater. Apple's PROT_EXEC restriction has zero security benefit: anything JIT-compiled code can do, interpreted code can do too.
(In the same vein, macOS Santa (used by many a tech company to police programs runnable on Apple developer endpoints) doesn't restrict script launches at all. The interpreters that Apple ships with macOS have built-in FFIs like Python ctypes that enable programs launched using these interpreters to do anything a Mach-O binary can do.)
While I respect the sweat and cleverness that went into making JS runtimes work efficiently while wearing Apple's no-PROT_EXEC hairshirt, none of this work ought to have been necessary. Imagine how much further ahead the industry would be if these big brains had focused on solving some other problem.
I totally agree that the lack of JIT has been a huge waste of human resources. NativeScript and React Native have had to move so many mountains to make viable software despite it.
The other replies seem to have answered it! I've never understood why iOS allows JIT inside WKWebView yet not inside bare JavaScriptCore. Maybe because WKWebView runs in its own separate process while a JavaScriptCore instance would run in-process? As for Android, guess Google are just a little more gung-ho?
> thanks for the article. these days it's rare to see something so well researched and written while still being able to tell it was authored by a human. cheers
Really appreciated, thanks! I do despair that the ease of generating content with AI disincentivises (and discourages) authors from expending real effort, but there's a satisfying inflection point at which you can be sure you've written something that an LLM couldn't possibly have come up with.
didn't read it, was too long and the parts i skimmed were dull.
It's fine to not like things, but what exactly is the point of your comment, other than being mean?
I don't know, but I'd love to know more about JavaScript in general, so I found it especially bad. Just my 2c.
> For all the overlap in strategy, notable is the variety in engines underpinning all these runtimes. While Deno continues Node.js's tradition of using V8, we see Bun employing JavaScriptCore, WinterJS using SpiderMonkey, LLRT on QuickJS, and Cloudflare Workers on the tailor-made workerd. No longer is the backend solely a stage for Node.js and V8 – it's now fashionable to pick a runtime and engine optimised for the task.
I don't think this is completely accurate. While Cloudflare's workerd is a tailor-made runtime (equivalent level to deno/node/bun/etc.) it uses the V8 engine.
https://github.com/cloudflare/workerd/blob/main/docs/v8-upda...
My mistake, thank you for the correction! I've updated the article to avoid any future readers walking away with the wrong information.
That is correct. It uses v8
Dang, I misunderstood that, thanks! Fixed.
I see a few mentions of QuickJS, but they all refer to the fork of Bellard's QuickJS https://bellard.org/quickjs/, which I think deserves a mention. It seems to be still active (last release 2025-04-26, GitHub mirror at https://github.com/bellard/quickjs shows some activity).
I had it in my head that quickjs-ng (https://github.com/quickjs-ng/quickjs) is now blessed as the official fork, with Bellard even making some contributions to it, but maybe I've got it wrong. Here's the latest (https://github.com/quickjs-ng/quickjs/discussions/258#discus...) clarifying the two forks, anyway.
Both are active now and they are diverging. It is kinda sad but Bellard seems to want to do his own thing. PRs are traded sometimes but as time goes on they will become different projects, I did suggest a name change (https://github.com/quickjs-ng/quickjs/discussions/1119)
Cool thanks for the historical context
Speaking of QuickJS, how does it compare to embedded Lua? In performance, memory overhead, C API, startup time?
No idea! The only benchmarks I'm aware of for QuickJS are against other JavaScript engines (https://bellard.org/quickjs/bench.html).
That said, one thing I've heard a lot from NativeScript TSC members building on top of QuickJS is that QuickJS has staggeringly low overhead for calling C functions. In fact, I was shown benchmarks in which it calls native functions faster than JS functions (perhaps it's easier to optimise because JS functions have complications like closure capture to worry about). I imagine this has conceptual overlap with how V8's Fast API works.
I embedded it at a game studio for PC and game consoles a few years ago. We opted for JavaScript over Lua for reasons I can’t disclose, but runtime performance was sufficient for our needs. I liked having reference counting since the determinism gave our tech lead a bit of confidence before cert.
We had some performance issues with marshalling and unmarshalling, but that was mostly due to API design and how our codebase was structured. As per usual, optimizing scripts was the best bang for buck.
I didn't do a formal test, but we use quickjs-ng/rquickjs as the basis for our scripting engine. Even with 20+ rust modules loaded it takes less than 4ms to spawn and we spawn a lot of them (multiple per http request that runs through the proxy).
I had a good time being involved with a couple JavaScript runtimes which didn't make this list, most notably PythonMonkey [1] which embeds SpiderMonkey into Python and uses Python's event loop for its async stuff. Another interesting one was DCP which is sort of a pseudo js runtime that runs ontop of other js runtimes (including a custom sandboxed server-side runtime we made [3], but also any web browser), to provide cloud function like compute for js and wasm based workloads.
Unrelated to the article, and already well known, is Pyodide which is a Python runtime in JS/WASM. I shoved Pyodide into DCP so people could run Python workloads in web browsers [4]. Crazy stuff...
1. https://pythonmonkey.io/ 2. https://distributive.network/workers 3. https://gitlab.com/Distributed-Compute-Protocol/dcp-native 4. https://willpringle.ca/blog/dcp/pyodide-worktime/
My analysis on the polyglot runtimes was relatively shallow and I did anticipate I might disappoint, sorry! Thanks for the additions!
I did begin a section on WASM runtimes, so Pyodide would've been of interest, but I could see the scope was likely to grow too much if I got into WASM so I held back.
This article deserves to be the basis of a Wikipedia page. It is so well written and full of references. Congratulations to the author.
Thank you very much, really appreciate it! There's a respectable article on engines (https://en.wikipedia.org/wiki/List_of_JavaScript_engines), but indeed, no article on runtimes. I did feel that they were under-discussed compared to engines, though, with most analysis focusing on browsers and Node-alikes, so wanted to add something to the literature.
Missing^1: JScript from MicroSoft (circa 1996). Not only was it in Internet Explorer, it could also be used server side (ASP) and for scripting (WSH).
Then there was JScript.NET (circa 2000)...
1. Not last decade but some of its contemporaries are mentioned.
Jscript was the most evil of all the JS implementations. It's the reason why MOST of the things people hate about JS are still in the language.
They wanted to remove a bunch of the badly-designed stuff from the language, but MicroSoft had just finished implementing Jscript complete with all those issues at great expense. They would rather have a language with issues than change their implementation, so the world got stuck with the issues and there's now too much code on the web to change without something drastic.
Mumbles about annoyances at hanging commas
I bet JSON would allow hanging commas if not for Jscript.
I deliberately scoped it to the last decade because I felt it would be very hard to dig up information from around 1996 – would be better for me to leave that period to folks who were active developers around that time and lived through it all.
But I did make sure to give Chakra a few mentions!
GraalVM/GraalJS is one I am most impressed with. It 'just works' and I've been able to integrate it into my java web applications easily. I mostly use JS in java to run handlebars.js and to test a JS code library that we use both on the server and the client.
I've heard nothing but high praise for GraalJS, yeah. I wonder whether it'd make any sense inside mobile apps – not due to any feature in mind, but simply out of interest because that's more the area I'm using JavaScript. The first-class interop with Java would be great for Android, though I wonder how cheap its interop with iOS (C-family languages) would be.
There's something special about the JS specs and test suite.
There are very few languages where you can write/run the same piece of code across so many different runtimes without major issues.
This article provides good insight on the boom, but leaves out any insight on the inevitable bust - not on JS per se, but the distinct runtimes.
Deno and Bun seem to be two highly competitive runtimes, each VC backed and positioned against each other, but the fairy tale of multiple winners seems unlikely in a world that favors power laws.
So then, how do others see these ecosystems surviving over the next decade? What are the canaries? And, how interoperable will our code be?
WinterCG is trying to help standardize web APIs to address these concerns. So one strategy when writing server-side js is to stick to standard APIs as much as possible.
If you do want to leverage runtime-specific code you can isolate that code in separate modules, so if you ever do need to migrate off a particular runtime it's easier to identify/replace that code.
Ultimately it's all JavaScript, and since most of these runtimes are open source even if they're abandoned we might see community forks. Though even if your chosen runtime is completely without support, I don't see a migration off being an extremely urgent or difficult task for most projects.
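That isolation pattern can be sketched as a single detection module; the global checks below are assumptions about how each runtime exposes itself (Deno and Bun via namespace globals, Node.js via `process.versions`), not something from the article:

```javascript
// Hypothetical runtime-detection module: keep all runtime-specific
// branching behind this one boundary, so migrating off a runtime
// means touching one file instead of the whole codebase.
function detectRuntime() {
  if (typeof Deno !== "undefined") return "deno";
  if (typeof Bun !== "undefined") return "bun";
  if (typeof process !== "undefined" && process.versions?.node) return "node";
  return "unknown";
}

console.log(detectRuntime());
```

Runtime-specific implementations (filesystem access, sockets, etc.) can then be selected behind this one function rather than scattered through the codebase.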
A very good thought! Deno and Bun have this formidable funding for sure, but Node.js is still innovating. Things like type-stripping and require(esm) come to mind. I think as long as big companies use it for Electron, Node.js will continue to get investment. Even if the investment were to dry up, experience teaches us that old tools die hard.
I would hope that interoperability goes up. The React Native community are closing in on support for Node-API, and with that maybe we can start sharing native code between desktop and mobile. The WinterCG effort is also going well.
As for canaries, I think sustainability is the main thing to look at. What is the surface area of this tool, how much expertise and effort does it take to maintain it, and who is invested in its survival? The more we can share common implementations like Intl and Temporal, the easier it is for smaller players to keep up. And of course, the more the big players try to diverge (looking at you, Chrome), the harder it gets.
I would like to clarify that LLRT is mostly community-contributed, plus one guy from AWS (Hi Richard!). There are many businesses using LLRT modules (I worked a lot on making it modular) and rquickjs.
I am a maintainer of both and I wrote a couple of modules myself. There is no better JS runtime for Rust IMO. It is based on quickjs-ng (a fork of quickjs, initially created due to quickjs inactivity, but now both projects are active and have diverged) with a lot of the major Node/WinterJS APIs.
Oh right, thanks for the insight. Wondering how to rephrase it. Did AWS at least create it, or fund it?
rquickjs looks like a good one to include in the article. I've just edited it into the polyglot engines section (I think I've got a mix of engines and runtimes in there at this point anyway).
AWS created it and it is still under their name but under the labs thing which means it is a "side project". Unsure how I would phrase it, but just wanted to acknowledge the community and businesses that are building it (maybe stewardship?)
Missed Nombas ScriptEase, a 90’s-era commercial runtime most famous for allowing JavaScript to operate the James Webb Telescope. http://brent-noorda.com/nombas/us/index.htm
Very cool, though when I was qualifying the scope for "the last decade", I was doing it by creation date rather than usage date, so I think I get a pass on this one!
There's a whole section on PhoneGap/Cordova, where, ignoring creation date, even most of the milestone years are outside the "the last decade" timeframe. It's a very strange decision, and leads to things like gjs only being mentioned as an afterthought, which exemplifies the major blind spot that people who came out of the NPM world have for the use of JS in stuff like Firefox (which was doing Electron before Electron) and Gnome (if you've used Gnome in the last 10 years, you've seen JS in action on the desktop). The best explanation for this is that the latter two are so successful at doing what they aim to accomplish that people don't even know it's there, and when things are more about getting things done than they are about hype, then the hype crowd of course fixates on the hyped stuff instead (no matter how short-lived or obviously-doomed-from-the-start).
Beautiful article. Thanks the author and whoever brought it to HN
Thanks for the article! Do you have a solution, if any, to maintain links green? Anytime I write something with lots of links, I'm always afraid that there's basically no way to retrieve the original page after some time.
Thank you! It did cross my mind. I've archived a couple of the hard-won links in there that I had a feeling were at great risk of rotting. But thankfully a lot of the links are to GitHub pages, so I'm not so worried about those.
Unfortunately, I think the article's readability would suffer if every single link were followed by an [archive] tag. Wikipedia have a better setup for providing both an archive link and a live link, which unfortunately the newsletter format doesn't really lend itself to.
I think copying the links as-is into the Wayback Machine will generally do in a pinch – just a bit of a pain to find the right month to roll back to.
Web archive and pray
fair
> Rhino
There was a time when Java shipped with a JS interpreter, but not a JSON parser... unless you use the JS interpreter.
No mention of Ejacs‽
https://github.com/emacsattic/ejacs
At 17 years old, I think it's comfortably outside of the scope of the article I was focusing on (the last decade), but thanks for bringing it to my attention!
> Ejacs is based heavily on work by Brendan Eich (SpiderMonkey, Narcissus) and Norris Boyd (Mozilla Rhino).
I did mention all of these, at least!
I'm curious, how do these new runtimes handle async context tracking? Do they all rely on Async Hooks or implement their own solutions?
I'm not familiar with the term "context tracking", and management of the event loop is a bit lower level than I normally go, but I do know several runtimes use libuv (https://libuv.org) to handle asynchronous I/O, the same as Node.js (which it was created for).
I'm sure there will be JavaScript runtimes out there using some of Rust's asynchronous schedulers like tokio (https://docs.rs/tokio/latest/tokio/), too, but I wouldn't be surprised if a large number of them just do things bespoke.
No love for njs, the engine for running javascript inside of the nginx web server?
Why would someone use js on a microcontroller?
Could be to draw upon the existing JavaScript ecosystem, or could just be because JavaScript is the language the dev knows best! I feel really nerfed when I have to move from JavaScript to other languages, personally, even if I can get things done in C++ and other languages.
For the same reason someone would use Lua or MicroPython.
If you're moving from a $0.50 to a $1.00 chip due to the overhead of using the higher-level language and save just one $200,000/year dev in the development process, you'd have to sell 400,000 units before the $0.50 chip would save you any money.
Even if you are moving from a $0.50 chip to a $3.00 chip, you can probably still save a significant amount of money on a lot of projects when you put together all the dev-years saved.
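The break-even arithmetic above, as a quick sanity check (figures taken straight from the comment):

```javascript
// Break-even point: extra per-unit chip cost vs. one dev-year saved.
const cheapChip = 0.5;        // $ per unit
const pricierChip = 1.0;      // $ per unit
const devYearSaved = 200_000; // $ per year
const breakEvenUnits = devYearSaved / (pricierChip - cheapChip);
console.log(breakEvenUnits);  // 400000
```

Below 400,000 units, the pricier chip plus the higher-level language is the cheaper option overall.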
> Lastly, JavaScript continues to show its strength as a language for GUI programming, being employed in a variety of ways to develop native apps on mobile phones and Smart TVs, though with web view based apps still being the vogue on desktop.
Really? I thought over the last decade we let other languages into the browser but Javascript was so amazing that no one uses the other languages. Javascript has had all this competition on the UI layer it’s amazing it came out #1. Am I reading the implication correctly?
> I thought over the last decade we let other languages into the browser
Not so much! Rhino was an effort to rewrite Netscape Navigator in Java, and I expect that if that rewrite had taken off, Java might've gained a foothold (though indeed, there were Java applets for a time).
There were also Flash apps, and of course there are some significant games based on WebAssembly and low-level graphics APIs.
But ultimately JavaScript works well for GUI because it is so dynamic, allowing you to be less rigid in how you architect things. And by being a scripting language, it's super quick to iterate on.
> JavaScript works well for GUI because it is so dynamic
Amazon moved to WebAssembly for their Prime Video application for both higher and less variable performance:
https://www.amazon.science/blog/how-prime-video-updates-its-...
https://www.infoq.com/presentations/prime-video-rust/
Java was never going to replace JS in a practical way because of security and the Java JIT taking forever to load and warm up.
I was being sarcastic in my post. I find that bragging about JavaScript as a flourishing UI language omits quite a bit about why.
Eh, writing is hard!
And how!
[dead]
[dead]
[dead]
What!? No mention of bun?
But Bun is mentioned three times?
They should include the link to their site then: bun.com
I used document.querySelector("a") and couldn't find mention of it, upon reading it thoroughly, they do mention it. My fault.
You ran javascript before doing a cmd+f? I guess you are the target audience for this article
You assume it was me that opened the page on a desktop or something with a primitive keyboard. I asked my agent to summarize and provide me the list of links to see if it was there. The list did not list bun. The code it executed through my phantomjs mcp server was simply:
Well i guess thats what you get for using mcp instead of just opening the article and reading it
Guess you should have used your eyes and brain instead of outsourcing it to a chat bot
How can you complain about the article after wrongly assessing it? That's next level AI slop
1. Check your AI's output before you share it with the world, so you don't waste everyone else's time.
2. The Bun project has two domains with identical content that don't redirect to each other: bun.com and bun.sh