This is very cool. BTW, when developing single HTML file apps, instead of localStorage, one can use the HTML as the source of truth, so the user can just save/save-as to persist. I had mentioned my quick and dirty attempt at an image gallery that is a self-contained html file and some really liked the concept, if not the "app" itself: https://news.ycombinator.com/item?id=41877482
Great idea, as usual killed in its infancy by lacking UA UX. We even already have an API to ask the user not to close the web page, because there's unsaved work; but it wouldn't work with "Save as...", because you can't detect that, and the browser won't do it for you, even if it knows you're editing a local file.
Same for all of the pointless cookie banners - they could've been UA prompts instead, putting the user in charge of setting a policy ("always trust example.com", "never trust example.net", "accept example.org for this session", etc). But building such prompts into a browser would've been a nuisance in 1997... So we ended up with that nuisance anyway, just enshrined by shortsighted laws that target each and every website - rather than the three remaining browser engine vendors.
It's not killed IMHO, I actively use a fleet of such single HTML file apps that I save locally and even share with people. My initial comment has an example of a gallery, works wonderfully.
Well my issue is you forget to hit Ctrl-S/Cmd-S and your changes are gone. I've been using computers since the early 90s and haven't found one program that does that; heck, many apps don't even have an explicit "save" button, saving (and syncing) is transparent.
The web "browser" wasn't "intended" for this use case, hence the issue. This could be easily fixed though -- just like cookies.
You can also use localStorage/IndexedDB and keep syncing it with a script element of type text/json. On load, see if localStorage is gone (different browser, domain, etc.) and restore from the script element if that's the case.
I explicitly do not want such a thing in many of my HTML-apps, but one could add it with relative ease.
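A minimal sketch of that pattern, assuming the page carries a data block like <script type="application/json" id="data"> (the element id and storage key here are made up):

    const KEY = 'app-state';

    function loadState() {
      // Prefer localStorage, fall back to the JSON baked into the HTML file itself
      const stored = localStorage.getItem(KEY);
      if (stored) return JSON.parse(stored);
      return JSON.parse(document.getElementById('data').textContent);
    }

    function saveState(state) {
      const json = JSON.stringify(state);
      localStorage.setItem(KEY, json);                    // day-to-day persistence
      document.getElementById('data').textContent = json; // so a saved copy of the page carries the data too
    }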
I tried this script (which Claude came up with) to save the file with all its changes. The steps are
1. Put this script in the html file and add a save/download button to trigger it
2. Set `contenteditable` on your editable elements
That's it. Now make changes on the page and click the download button to save the page with your changes. This should allow viewing your saved work without necessarily depending on JS.
The script:
    <script>
      function downloadHTMLFile() {
        // Get the current HTML content; prepend the doctype so the saved copy
        // doesn't open in quirks mode (outerHTML does not include it)
        const html = '<!DOCTYPE html>\n' + document.documentElement.outerHTML;

        // Create a temporary link element pointing at the page as a data URI
        const link = document.createElement('a');
        link.setAttribute('download', 'example-page.html');
        link.setAttribute('href', 'data:text/html;charset=utf-8,' + encodeURIComponent(html));

        // Append the link to the DOM, click it, then remove it again
        document.body.appendChild(link);
        link.click();
        document.body.removeChild(link);
      }
    </script>
I wish there was some browser solution for apps like this where you could save and share your app state between your own devices, and push/share that state with others, all without any server backend involvement - e.g. so you could have a Kanban board shared with others and the state would be stored locally and/or in bring-your-own cloud storage.
There's so many apps like this that could be simple, but for robust state saving involve setting up and maintaining a backend (e.g. with security patches, backups, performance monitoring). There's also the privacy implications of your data being stored on someone's server, and the risk of data leaks.
It's like there's a key part of the internet that's missing.
Something like this could be a browser extension? This exists?
`roamingStorage` as a relative to `localStorage` sort of like Windows' "Local App Data" versus "Roaming App Data" would be nice to have in theory.
Of course even if you kept it to the simple KV store interface like `localStorage` you'd need to define sync semantics and conflict resolution mechanics.
Then you'd have to solve all the security concerns of which pages get access to `roamingStorage` and how it determines "same app" and "same user" to avoid rogue apps exfiltrating data from other apps and users.
It would be neat to find an architecture to solve such things and see it added as a web standard.
> Of course even if you kept it to the simple KV store interface like `localStorage` you'd need to define sync semantics and conflict resolution mechanics.
It would be great to have this standardized. Custom two way syncing is a nightmare to implement correctly, and it doesn't make sense for apps to have to keep reinventing the wheel here.
It is a factor in why CRDT/OT posts and libraries get regularly upvoted on HN. A lot of us want a good, easy to use baseline. A lot of us are searching for that and/or thinking we can build that.
Part of why there's always some reinventing the wheel in this space, unfortunately, is that this also seems to be one of the harder problems to generalize. There are always going to be domain specifics to your models or your users or users' idea of your models that are going to need some custom sync and conflict resolution work. Sync has a lot more application specifics than most of us want.
That said, yeah, if there was a good simple building block "baseline" to start with that met a nice 80/20 rule plateau, it would be great for giving apps an easy place to start and the tools to build application and domain specifics as they grow/learn their domain.
(One such 80/20 place to start, if the idea was just a simple KV store, might be a basic "Last Write Wins" approach and a simple API to read older versions when necessary. You can build a lot of the cool CRDT/OT stuff on top of a store that simple. For many starting apps LWW is generally a good place to start for the basics. It doesn't solve all the syncing/conflict resolution problems, you'll still need to write app-specific code at some point, but that could be a place to start. It's basically the place most of the simplest NoSQL databases started.)
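As a rough illustration of that "last write wins, but keep older versions readable" baseline - purely a sketch, not any particular product's API:

    // Each key maps to a list of {ts, value} versions; the newest timestamp wins on read.
    function mergeStores(a, b) {
      const merged = {};
      for (const key of new Set([...Object.keys(a), ...Object.keys(b)])) {
        merged[key] = [...(a[key] || []), ...(b[key] || [])]
          .sort((x, y) => y.ts - x.ts); // newest first; history is kept, not discarded
      }
      return merged;
    }

    const read = (store, key) => store[key]?.[0]?.value;  // the LWW view
    const history = (store, key) => store[key] || [];     // for app-specific conflict handling later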
> It is a factor in why CRDT/OT posts and libraries get regularly upvoted on HN. A lot of us want a good, easy to use baseline. A lot of us are searching for that and/or thinking we can build that.
Do you have any views on the easiest way to do two-way syncing for a web app if you want to avoid relying on proprietary services, or a complex niche framework? This comes up for me every few years and I'm always disappointed there isn't a no-brainer way yet to get an 80% good enough solution.
For a couple years I thought PouchDB might give us a strong relatively standardized 80% solution, but then the CouchDB ecosystem kind of exploded (IBM bought Cloudant and pivoted it to a much worse IBM product; Couchbase continued to pivot away from CouchDB compatibility; Apache's politics got real confused real quickly about what the next version of CouchDB should even be and tried to rewrite it in six different languages).
I still think the multi-version document database approach is probably the easiest 80% solution baseline to work with for many projects, especially with a mixture of developer skill levels and document types. It doesn't save you from needing to solve conflicts, of course, but the default behavior (newest version wins) is a predictable one and generally just works until it doesn't work anymore by which point you should have a better idea of your application's specific domains and which conflicts are more important to resolve than others.
It's unfortunate that "Mongo-compatible" won the marketplace, because CouchDB had a better focus on standardizing the sync experience between DBs and "Couch-compatible" was nice while it lasted (not nearly long enough).
I think the CRDT/OT library builders got a little bit too lost for too many years thinking that the "conflict-free" initial "C" in CRDT was fate or manifest destiny rather than hopes and dreams (to see dashed against the hard rocks of reality). As someone whose introduction to CRDT/OT was specifically from the lens of source control, I know that I was a skeptic that they'd ever truly achieve "conflict-free". Conflict resolution in source control will almost always be a human-in-the-loop touch point. I don't think you can solve for "conflict-free", I think you just try for more and more ways to sweep conflicts under the rug and I think that's a pretty good summary for part of why we've seen a lot of increasingly complex frameworks in the space that feel like they need increasingly more PhDs in the room to build apps on top of.
I think we're finally starting to see the other side of that mountain, as more (not all, but a lot more) of the developers working in that space do start to realize that you can't eliminate conflicts entirely in a general way for every application domain, you can just offer better tools for how to manage them when they naturally occur. The last few libraries that I've seen on HN have given me hope that we are starting to see more pragmatic and fewer dogmatic projects in that space and maybe soon one will feel like a real 80% good enough contender for general application development.
Amusingly, or perhaps predictively, I think the ones that are going to get there to feeling like an 80% good enough are going to be the ones that most look like a multi-version JSON document database at the high-level API and maybe also in the low level storage mechanics (to run anywhere localStorage does and/or any Mongo-compatible NoSQL store). In theory though, maybe that CRDT/OT-based version gives you a lot of knobs under the hood in between storage and high level specifics to help push individual applications from 80% to 90%+ if you learn how to configure it. That would be better than just an 80% if that happened, and is a reason to stay excited about the CRDT/OT space.
I think the other side of the wishlist is also hoping for better peer-to-peer/device-to-device solutions for the web and web browsers. There was a flurry of work that happened there during the cryptocoin boom that never quite built the 80% good enough solutions for general applications or that weren't too heavily tied to cryptocoin projects that added proprietary complexity. I'm afraid that as the cryptocoin boom has faded a lot of those projects are already abandoned and we aren't going to see another spike in interest in such tools again for a while. (Unfortunately AI doesn't need peer-to-peer, it needs big pipes to large central datacenters again.)
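To make the multi-version, newest-revision-wins approach described above concrete, a PouchDB/CouchDB-style sync looks roughly like this (the database names and remote URL are placeholders):

    import PouchDB from 'pouchdb';

    const local = new PouchDB('boards');
    const remote = new PouchDB('https://couch.example.com/boards');

    // Continuous two-way replication; offline edits catch up when the network is back.
    local.sync(remote, { live: true, retry: true })
      .on('change', info => console.log('synced', info.direction))
      .on('error', err => console.error('sync error', err));

    // Conflicts don't block writes: a deterministic winner is picked, and the losing
    // revisions stay readable if the app wants to resolve them properly later.
    const doc = await local.get('board:groceries', { conflicts: true });
    console.log(doc._conflicts || []); // ids of losing revisions, if any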
> If syncing is enabled, the data is synced to any Chrome browser that the user is logged into. If disabled, it behaves like storage.local. Chrome stores the data locally when the browser is offline and resumes syncing when it's back online. The quota limitation is approximately 100 KB, 8 KB per item.
As of writing this there are 113 previous comments, none of which mention "metrics," "Scrum," or "velocity."
I hope any Scrum Master who comes across this thread takes the absence of those terms as an indicator of the value of those things and all the related feature bloat that bedevils Agile project management tools.
I run Planka in an LXC container on Proxmox, but this looks useful (despite being 'beta') for anyone who just wants an absolutely no frills local-first GUI for simple task management.
The README mentions that "Trello wasn't bad", but storing this type of data in the cloud wasn't desirable. Well, Planka is the answer to that.
One observation: Kanban is all about limiting the work in progress. That’s really its foundation. WIP limit is the main means for controlling and improving overall workflow effectiveness.
I would argue that boards not offering a WIP limit are not really “Kanban” boards, as they defeat the very goal of Kanban.
Something comes in, I put it on the board to get it out of my head. Then, when appropriate, I go to the board, rearrange the top 5 or so items by priority, then start on the top item.
Many things never get done, but they have turned out to be lower priority, by definition.
If I am waiting for something to happen on a current task, I put it second or third in the queue.
Cool! This could be interesting as the base for a quick kanban board for mid-sized personal projects.
That said: agree with others that sharing state between devices (either yours or others), and being able to collaborate on the same board, is sort of the canonical feature requirement of kanban boards. They can be used for 1-person projects, goal tracking, etc. - I've used e.g. Notion boards in this way - but they gain most of their value from allowing multiple people to share awareness of task status and ownership.
Plus the use of localStorage means I'd eventually blow away my board state by accident - which is kind of a showstopper IMHO; being able to trust your tools is important.
Still: nice to see people experimenting with what you can do just using web basics :)
So I went and checked my suggestion on colors. You pulled it, but then never actually implemented it (unless I don't know how to toggle it). Are you going to add colors to the current version or should I go back and retro merge the color code I have?
If I remember correctly, Gmail stayed in beta for some (~4+) years.
Looking at the repository, maybe the author wants to work on mobile support, think about alternative storage methods; as for the last commit, maybe he's working on something else or simply living his life.
Pretty sure Gmail never stayed 4 years without commits though. Probably not even 1 year. Not disagreeing with the notion itself, but noting that the comparison doesn't really fit.
Fortuitous - I've been looking for a simple Kanban board this week and all the popular ones are a bit heavy (`plane` uses 8 containers!) or insist on using `mysql`.
Going to take a look at integrating this later, along with nbagent [1]. I have a 'dark mode' open source project with a few collaborators, many tasks and no good way to currently organise them.
One change I think I'll need to make is prettify the export of .nbx (JSON) so that git diffs perform better. Hopefully nbagent can keep the data synced as the board is updated.
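The prettifying itself is a one-liner wherever the export gets serialized (`boardData` here stands in for whatever object holds the .nbx payload):

    // 2-space indentation plus a trailing newline keeps git diffs line-oriented
    const nbx = JSON.stringify(boardData, null, 2) + '\n';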
Really cool. I love that editing a note doesn't involve pop up modal windows, save buttons, focus selection and all the other crap that things like Notion and Trello end up relying on.
Nice to see, but hope he moves away from localStorage, as that is far from ideal. Chrome clears this when cookies are cleared, and many 3rd party tools suggest this as a way to 'optimize' your web experience. I've often seen people file an issue for an application I made that used this.
I can't help but think the missing bit is portability of the data file. I wonder if simply allowing a binary or even JSON representation to be copy pasted from the browser would work well enough.
I like these tools that are in one local html file. But this seems a bit unsupported. Last commit is one year old. If you need something more, but not Jira, then kanboard.org is what I can recommend
That's cool - I've been looking for a simple self-hosted kanban. A backend is a requirement for me to use it across devices, though. But, I love the direction.
nice!
how do i enter bulleted (aka un-numbered) list items inside each note? is there a numbered/unnumbered list option? can i highlight lines inside a note and make them bulleted?
I think "single HTML file" sets up a certain expectation that a five-thousand-line long HTML file with ~3500 lines of embedded JS doesn't really live up to. I mean, hey, everything can be a single HTML file if you embed the bundle inline in your HTML!
Cool project, though - don't mean to take away anything from it.
I totally understand your take, but as a guy that spends most of his time on side projects working on single HTML files, I have a different perspective.
I find the totally self-contained nature of them very appealing because it travels well through space and time, and it's incredibly accessible, both online and offline.
My current side project is actually using a WebDAV server to host a wide variety of different single HTML file apps that you can carry around on a USB drive or host on the web. The main trick to these apps is the same trick that TiddlyWiki uses, which is to construct a file in such a way that it can create an updated copy of itself and save it back to the server.
I'm attracted to this approach because it's a way to use relatively modern technologies in a way that is independent from giant corporations that want to hoover up all my data, while also being easy to hack and modify to suit my needs on a day-to-day basis.
I'm authoring two. One is a keyboard-based mnemonic launcher for accessing various websites. It's basically a way to have your bookmarks outside of the browser and quickly accessible. The other is a tabbed markdown document that can render static copies of itself.
The two projects out in the wild that natively work with this approach are TiddlyWiki and FeatherWiki.
I see room for a lightweight version of a calendar, a world clock, and even a lightweight spreadsheet that could be useful. I also have an idea for something I call a link trap where you can rapidly just drop links in and then search and sort over them to quickly retrieve something you saw before that was interesting. Sort of like my bookmarks-outside-the-browser except more of a history-outside-the-browser.
I've been working on projects like this for some time and that particular project happened to make the Hacker News front page but was not the client files that I'm referring to here, but rather a server. It was a side project to play with a redbean, since I'm a fan of jart's work.
My primary saver is a Python script that I wrote called Notedeck, but I also sometimes use a Rust webdav server called dufs.
I haven't released either of my projects I'm working on that are the client files, otherwise I would have just linked them.
You should be able to use the Smart HTTP protocol with HTTP Basic Auth (https://git-scm.com/docs/http-protocol) to push a locally-created commit. Forgejo supports CORS here, but I don't know whether GitHub does.
That's not to say one couldn't still do what you're describing via other headers; I'm just saying "<input name=username><input name=password>" won't get it done
I think I'd call that "local-first" or perhaps "offline-first". I can't think of a reason why a local-first app would need to be crammed into a single HTML file, hehe. I do agree it's very cool, though!
I tried to allude to this in my response above, but the fundamental reason that they're all crammed into a single file is to make sharing really easy. This is what I meant when I said it travels well through space.
I definitely get the appeal. It’s analogous to the single file executable. One file to move around, no install process needed, just grab and run. It was the main reason I used to reach for Delphi back in the day for Windows utilities.
Yes, I have done a lot of analysis on the structure of tiddlywiki and featherwiki to figure out the right way to do this in a more repeatable way. And that's the basis that I'm using for the projects I'm working on. I'm planning to write a blog post on this soon because I've learned a lot. So stay tuned.
For the benefit of the Hacker News audience that are curious, let me take a stab here.
The general strategy is to include JavaScript in the HTML document that knows how to look at various nodes in the DOM and create a new version of the document that uses an updated set of data.
Some sections of the data can be pulled verbatim. So, for example, if you have one giant script at the bottom of the doc, you can give it an ID of perhaps S, and then use that ID to retrieve the outer HTML of that script tag and insert it into the clone.
Other areas of the DOM need to be templated. So for example, I insert a script that is of type JSON, and that contains all of the data for the application. This can be pulled and stringified when creating the clone.
For a minority of attributes like the title and document settings like whether or not you're in light or dark mode, you want to avoid a flash of unstyled content and so you actually templatize those elements so they are written into the cloned copy with the updated data directly inline.
There's no real magic to it, but it can be a little bit tedious. One really interesting gotcha is that the HTML parser recognizes a closing script tag even inside a quoted string. And so I have to break up all script tags that appear in any string that I'm manipulating so that the parser does not close out the script I'm writing.
I have a very minimal app that uses this technique called Dextral. I'll be happy to host it on my site and link it here.
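A stripped-down sketch of that cloning strategy - the element ids, data shape, and skeleton template are illustrative, and a real version would carry over styles and the rest of the markup as described above:

    // Assumes: <script type="application/json" id="data"> holds the app state,
    //          <script id="s"> holds the application code itself.
    function serializeSelf(state) {
      const json = JSON.stringify(state)
        .replace(/<\//g, '<\\/');                           // keep "</script>" from ending our tag early
      const code = document.getElementById('s').outerHTML;  // pulled verbatim, as described above
      return '<!DOCTYPE html>\n<html><head><meta charset="utf-8"><title>' +
             document.title + '</title></head><body>' +
             '<script type="application/json" id="data">' + json + '<\/script>' +
             code + '\n</body></html>';
    }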
Out of curiosity why WebDAV vs loading the HTML straight from disk wherever they are and having a service that's only needed for saving changes?
Edit: I sketched out a basic version in JS + Python, and it's fairly OK to use. The nice part about this approach is that the HTML files are viewable without any extra services, and the service that enables saving doesn't itself need configuration or to keep state.
The downside is that even though you know the current directory due to window.location, the API is written in a way that assumes you either want a default location like "desktop" or need the user to navigate to the directory before you can do operations on it (for security reasons), even from a local context. The user needs to select the directory once per fresh page load (if you've dynamically reloaded the current content then multiple saves need only prompt once so long as you save the handle).
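A sketch of that pick-the-directory-once flow with the File System Access API, which sounds like the API being described (Chromium-only for now; the file name is made up):

    let dirHandle = null; // survives across saves, but not across a fresh page load

    async function saveBoard(html) {
      // First save after a page load: the user has to pick the folder once.
      if (!dirHandle) dirHandle = await window.showDirectoryPicker();
      const fileHandle = await dirHandle.getFileHandle('board.html', { create: true });
      const writable = await fileHandle.createWritable();
      await writable.write(html);
      await writable.close();
    }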
"Single HTML file" sets up an expectation to me that it is (1) browser based and (2) client-side only. In other words, you can just open the file and start going without setting up a server, setting up a database, installing anything, or even having internet access. The last is not technically required but I think it is implied. It does not imply anything about the length of the file or the presence of client-side scripting.
What about "single HTML file" sets up an expectation about size? I'm genuinely confused. They seem like nearly unrelated concepts—this isn't a project about trying to fit a kanban board into a kilobyte or anything like that.
I think there are reasonable expectations around code quality that most projects adhere to, e.g.:
- Split JS out from HTML, split CSS out from HTML
- Keep files reasonably small
So if I read "Single HTML file" I'd expect around a couple hundred lines at most, possibly with some embedded CSS.
It's kind of like saying "I've solved your problem in one line of JS" but then your line of JS is 1000 characters long and is actually 50 statements separated by semicolons. Yes, technically you're not lying, but I was expecting when you said "one line of JS" that it would be roughly the size and shape of a typical line of JS found in the wild.
IMO those expectations are not reasonable at all given the description of the software; they sound more like anti-goals.
When I see “single HTML file” it conjures up the same expectations as when PocketBase[0] describes itself as an “Open Source backend in 1 file”.
That is that I can copy that file (and nothing else) somewhere and open/run it, and the application will function correctly. No internet connection, no external "assets" needed, and definitely no server.
This mode of distribution, along with offline and local-first software, avoiding subscriptions and third party / cloud dependencies, etc. all appeal to me very much.
So far I'm impressed, I appreciate the nice, dense and uncluttered UI out of the box and it seems to cover enough functionality to be useful. I'll definitely look out for a chance to give it a spin on something real.
> along with offline and local-first software, avoiding subscriptions and third party / cloud dependencies
Sorry if it's a bit direct and unrelated. I've actually got a question if you wouldn't mind.
I've been in the process of creating a local-first, non-subscription based Linux/Windows application that acts as a completely local search engine for your own documents and files. It's completely offline, utilises open source LLMs and you just use it like a search, so for example "what's my home insurance policy number", "how much did I pay for the humidifier", that kinda stuff. Things where you don't know exactly where the file is, but you know what you're looking for. You can browse through the results and go direct to the document if you want, or wait for the LLM response (which can be disabled if you just want plain search) to sift through the sources and give you a verifiable answer (checks sources for matching words, links to the source directly for you to reference etc).
My question would be: if you wanted something like this, how much would you pay? I'm going with the complete ownership type model, you pay, you get the exe and that's that. You like the new features in the next major release, you pay an upgrade fee or something like that, but it's a "one and done" type affair.
$10, $30, $60, $100? I want it to be accessible to many people that don't want to feed their data to big corps but I also want to be able to make a living off improving it and bringing these features to people.
I've not really worked out any of this monetary side of things, or if it's something people even want. It's something I've developed over time for myself which I think could potentially be useful to other people.
I try to avoid proprietary software when I can, so if I wanted something like this I’d definitely look for open source options first. I’ll endure a reasonable amount of setup pain as long as the solution is what I’m after to go the open source route over a proprietary app.
For example, your idea seems to sit somewhere between Alfred (which I’ve bought every upgrade/ultimate pack/whatever for) or Raycast, and an LLM augmented search of a NAS / “personal cloud” server. So assuming I wanted it, if there was no neat and self contained open source solution, I’d probably try to cobble something together with Alfred, Ollama, Open WebUI, etc. (all of which I already run) plus some scripts/code first, too.
That said, for a good, feature-full local/self hosted solution that does exactly what I want in the absence of an open source option (or if it’s just much better in significant ways), I’m generally willing to pay between $20–$100 per major release (for ex. I pay around that for e.g.: Alfred, the Affinity apps). For this I suppose $30–50 if it was pretty slick and filled an important niche. (I have paid more a handful of times in my life, usually for very specific use cases or if it helps me professionally, but not very recently.)
However, if a nice (or very promising and exciting to me), well maintained open source (GPL/MIT/Apache/BSD type license) solution does [most of] what I want and it’s something I really use (and a smaller project[0]) then I donate $10–30 per month (ex.: Helix, WezTerm). I sometimes do smaller or one-off donations, etc. for stuff I use less but appreciate.
That is, I intentionally pay more for open source I care about, and would humbly suggest considering that option for your project. Though I recognise that sustaining yourself financially that way is more than likely considerably harder, even with my small personal attempt at creating incentives for the world I want to see :)
NB: I do not buy subscription software unless it comes with a genuinely value added service (storing a few MiB on the devs cloud service doesn’t count) or access to data, for instance detailed snow/weather forecasts, market data, an advanced AI model (though my admittedly relatively minimal LLM use is currently >95% local and free/“open source” models).
[0] I don’t give money to the Linux kernel devs, etc. as I don’t think it’s likely to have as much positive impact
Thank you so much for taking the time to write such a detailed reply, it's given me a list of things to think about and it's honestly really appreciated. Completely understand the open source side of things and avoiding proprietary tech, I'm exactly the same and only standing on the shoulders of other open source software. I'm utilising SQLite and FAISS, so just files on disk that technically _any_ frontend or whatever could display, it's your data to do what you want with.
Not heard of Alfred as I'm not in the Apple ecosystem, but yes, you've hit the nail on the head between the combination of both after doing a bit of digging.
I'll seriously think about making it open source (time to brush up on the different licenses again). I want to keep it accessible so even my grandma could use it. I'm not expecting her to go cloning a git repo and checking dependencies etc, so I'm packaging it into a standalone executable. Maybe making the source open is something for me to consider and people can just pay if they don't want to go through any setup hassle (do I put some soft donation paywall up with a $0 minimum or something - just thinking out loud).
In terms of pricing, you've landed where I was thinking, maybe more towards the $30 end. I mean I think it's pretty slick and fills a niche, but I'm conscious I may be ever so slightly biased. A lot of stuff to mull over. Thanks again, really useful.
Thanks for taking the time to think about these things upfront rather than just cynically chasing money like 99% of paid/monetised apps out there.
It will greatly increase the attractiveness of your software to me if you stick to the philosophy you’ve outlined there.
One approach that I’ve seen and have absolutely no issues with (in fact I think it’s a pretty smart way of doing things) is where a fully open source project provides code and releases on GitHub and in package managers like Homebrew, but also publishes releases as paid software on app stores.
This allows users to pay for the peace of mind of a “verified” app store install that Just Works. It also provides an easy way for the more technical among us to donate. I’ve switched to paid releases like this myself for a least a couple of fully open source projects just to give a little back.
You see "single HTML file" and think "split JS out, split CSS out"? Did you not see that it was single file?? The title of this wasn't "hundred file React app".
Why mention it when any project can technically be turned into a single html file? In my opinion there is an expectation of simplicity and as a result a small size with that statement.
The simplicity isn't in editing it, which most users never will do. It is in that you can drop a single file anywhere and not worry about its dependencies.
Any project can be a single HTML file, but most projects prefer not to.
Most projects prefer to have a separate database, server side rendering and often even multiple layers of compilers too.
A lot of projects even require hundreds of megabytes of language runtime in addition to the browser stack.
So a single HTML file is still unusual even if it’s something nearly any web app could technically do if they wished.
And for this reason alone, I think it’s unreasonable to have expectations of a JavaScript-less, CSS-less code-golfed HTML file. This isn’t sold as a product of the demo scene (that’s another expectation entirely). This is sold as a practical self-hosting alternative for people who need a kanban quickly and painlessly. Your comment even proves that it works exactly as that kind of solution. So having inlined JS is a feature I’d expect from this rather than complain about.
Single HTML file conjures up memories of certain Windows software of the 90s where it's a single .exe, installer-free program. It could be copied onto a floppy and run on my school's computer.
We are definitely coming at this from a different angle.
If I can actually hit "ctrl-s" and save it offline, that's a huge and worthwhile feature that meaningfully changes how I can interact with the project; it's not purely a fluff description.
That will probably never work easily in any secure browser. I think at the very least you would have to go: ctrl+s, dialog opens hopefully at the right directory you want to store in, enter/return.
The message you're replying to did not mean that "ctrl-s" is literally the only input, but implied the whole file saving workflow that begins with "ctrl-s". The fact that you're pointing out that the other interpretation is ridiculous is enough of a context clue to realize they're not likely to be saying that.
The point is that you can save the file and then use it offline. The exact set of clicks to make that happen are completely irrelevant to the core point.
So when you save a document in any popular word processing program, and every time you save it, a dialog would pop up asking you where to save it ... You wouldn't find that annoying at all, do I understand that correctly?
Your line of questioning is based on an incorrect set of assumptions.
The number of clicks is an implementation detail. It depends on whether or not you're using the web file API, some browser download capability, a browser plug-in, a mobile app, desktop app, a webdav server, or something else.
For people trying it for the first time, they often have the experience you're describing. But for most anybody that actually picks this up and uses it on a day-to-day basis, they use something else that saves transparently and automatically.
All of this is orthogonal to whether or not it's in a single HTML file. I fear you took lelandbatey's original ctrl-s reference a bit more literally than intended, though if you want to be pedantic, I can confirm I use applications in this style all day as part of my daily workflow and I do press ctrl-s and it saves with no further interaction in fully patched versions of Chrome, Firefox, and Safari with no plugins whatsoever.
Would you be so kind as to share how you manage to get them to save without further interaction? That would really add usability. Maybe I did take the ctrl+s too literally, but being pedantic was not my intention. Merely hinting at this usability issue. So a solution would be very welcome.
(And I am not the one downvoting you, in fact I couldn't even.)
As an aside: I do find these applications very interesting and am considering to make use of Nullboard myself, but also am weighing it against simply using org mode in Emacs and am looking for any advantage it might offer. Of course the ctrl+s issue plays a role there as well.
I disagree - there is value in single file versus multiple file, even if the LoC are exactly the same.
It’s one reason Mac Apps get bundled as a single “file” from the user perspective. You don’t have to “install”, you just copy one file with everything. It’s a simpler dev experience.
Sure there are tradeoffs, but that’s great! We should accept that tradeoffs mean people can chose what works best for their specific context, rather than “best practices” which are silly.
> I mean, hey, everything can be a single HTML file if you embed the bundle inline in your HTML!
Yes, they could be. And then they would have the same super power is this file: you can put it on a flash drive and run it anywhere with no setup or installation.
Lol, I was just disappointed because I was hoping to see a kanban board fully implemented in CSS/HTML. I do also dislike when nitpicky comments like this get upvoted.
TiddlyWiki famously uses the HTML file itself as the source of truth! It's a great system for what it needs to be. https://tiddlywiki.com/
Does using save as retain changes you make in the HTML (like edits on a contenteditable element)? In my experience it does not. When you save, the originally loaded html will be saved, any changes in html stay in your browser.
My bad, I should have mentioned that you should choose the option to save the complete web page. It becomes the default on Firefox once you select it. Not sure about Chrome and others.
It looks like this on Firefox, Windows: https://imgur.com/zNlEGgK
What is UA UX?
UA = User Agent (in this case, web browser). UX = user experience (catch-all term for "design", "how it works", integration, etc).
Was not hip to the UA part so couldn’t directly connect with the UX part, thanks
Wait, so every time I make a change I need to remember to save or it's all lost? Or am I missing something?
Not so different from Word/Excel/Lotus 1-2-3 from the 90s! I like that most software saves-as-I-type now, but that was the workflow back then. Ctrl+S was the most used key combo on my keyboard back then. Now, it's `:w<Enter>`.
My god this comment made me feel old.
God forbid you have to remember to save your work!
The mental model my kids have for work is that typing or even thinking is itself a finished product. For my generation that idea of a conscious action of saving your work on a computer made me think more about what I was doing and how I was doing it. But I am an old.
If you go back in time, there were editors that by default would save the previous version of a file to .bak each time you saved the current status. The fear of accidental saving or editing and hence overwriting old good stuff was higher than accidentally losing new good stuff. There was less protection of system files, config files etc, so chances were you would brick apps or even the OS. Things have gotten more forgiving since.
I understand the overall contrast you're sketching here. But can you elaborate on
> typing or even thinking is itself a finished product
Any specific examples where you notice the difference?
Do you really long for that?
It's been over 20 years with auto-save being pretty common, one has to adapt to the modern times, especially when it makes things better.
I don't have to "remember to save my work" when I write on my notepad, why should it be different on a computer?
> Do you really long for that?
Yes, absolutely. Saving data you don't want saved and overwriting data you want to retain are just as bad as not saving data you want to keep.
Keeping a scratch file to restore from unexpected application exits (crash, power loss, etc.) is fine but beyond that I expect to be in control of when and where things are saved.
> one has to adapt to the modern times
I expect my computers to adapt to my requirements, not the other way around.
> especially when it makes things better.
Modern rarely equals better.
Neither save strategy is 'the better one'. Next step would be that git auto-commits...
Wait until you discover pen and paper.
At my first job, one of my responsibilities was to write the product manuals. My boss would set a timer next to me and instruct me to ctrl-S every time it went off.
Corrective action from having lost work too many times :-)
I mean - yeah, honestly, god forbid. Requiring manual saves with limited change history (or none at all) was the bad old days. That was bad UI/UX, literally everybody had a “oops I forgot to save” and a “oops I saved and I didn’t mean to” horror story. Things are better now.
I wouldn't say it was totally awful. At least, prior to having an Undo option, or with an undo that could only go back 3 steps, doing a Save As before making any large changes was a pretty common workflow. I might end up with 50 versions of a document numbered incrementally by the time I was finished. That is still a necessary workflow for certain types of documents. I don't necessarily want everything saved automatically all the time.
I love that my paper notebook saves everything automatically, I don't long for it to all disappear if I forget to press a button and give it a name.
You can get the browser to confirm you want to close the tab. That's what I did with my little single-file scrapbook thing: http://kaimac.org/posts/scrapbook/index.html
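For reference, that confirmation is just a `beforeunload` handler gated on a dirty flag:

    let dirty = false; // flip this to true whenever there are unsaved edits

    window.addEventListener('beforeunload', (e) => {
      if (!dirty) return;
      e.preventDefault();   // modern browsers then show a generic "leave site?" prompt
      e.returnValue = '';   // older browsers need this instead
    });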
Yes, is that a no-go? Maybe I’m just being too old-school :)
It does make forks a lot easier, though!
Another advantage is that it makes it clear what you’re saving, reducing the likelihood of errors being persisted.
There is a pretty newish filesystem API around. You could probably make auto saving possible via that - at least after a prompt for the filesystem permission.
Or keep it as is. That's fine too. It just came to mind
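A sketch of that auto-save idea with the File System Access API (Chromium-specific; it assumes the user has already picked a file once, e.g. via showSaveFilePicker, and that the handle is kept around in `handle`):

    async function autoSave(handle, getHtml) {
      // Re-prompt only if the readwrite permission has lapsed
      if (await handle.queryPermission({ mode: 'readwrite' }) !== 'granted') {
        await handle.requestPermission({ mode: 'readwrite' });
      }
      const writable = await handle.createWritable();
      await writable.write(getHtml());
      await writable.close();
    }

    // e.g. save every 30 seconds
    setInterval(() => autoSave(handle, () => document.documentElement.outerHTML), 30000);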
You could combine localStorage and manual save. Anything unsaved goes to localStorage, on save write to html and clear local storage. Best of both worlds?
If you construct the file to be able to save a copy of itself with updated data, then it can do so automatically without any user interaction, either on a timer or in response to some event.
I mean, that’s what I have to do in vim, so why not?
I like urls as source of truth. It’s beautifully deterministic
How would this work without manually editing the HTML file?
You write Javascript to read and write to HTML, like, develop a single page app that treats HTML as the view and the database, and restores state on DOMContentLoaded from the HTML. Super easy and maintainable for micro apps.
You can also use the contenteditable attribute and use no JS, so you basically have a notepad.
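The no-JS version really is about this small - a sketch; whether "Save Page As..." keeps your edits is the caveat debated elsewhere in the thread:

    <!DOCTYPE html>
    <html lang="en">
      <meta charset="utf-8">
      <title>Notepad</title>
      <body contenteditable>
        Type here.
      </body>
    </html>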
Came here to comment this as well! TiddlyWiki uses this as its default storage format.
This is mine.
FWIW here's a Show HN from 2019 - https://news.ycombinator.com/item?id=20077177
Do you know if a remote backup service was written? I am going through the SimpleBackup code and (if one doesn't exist) I would like to contribute a remote agent for backup. Any pointers for that implementation would be appreciated.
Yes, I wrote one [1] and others did too, e.g. [2] and [3].
The gist of it, as mentioned in [4], is that you need to have a web server that implements checkStatus and saveConfig PUTs, and PUT and DELETE for saveBoard.
[1] https://github.com/apankrat/nullboard-agent
[2] https://github.com/luismedel/nbagent
[3] https://github.com/OfryL/nullboard-nodejs-agent
[4] https://nullboard.io/backups
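A minimal sketch of what such an agent could look like in Node/Express - the routes, port, and lack of auth here are purely illustrative, and [4] describes the contract the app actually expects:

    const express = require('express');
    const fs = require('fs');
    const app = express();

    app.use(express.json({ limit: '5mb' }));

    // The board runs as a local HTML file, so the agent has to answer cross-origin requests.
    app.use((req, res, next) => {
      res.set({
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'GET,PUT,DELETE,OPTIONS',
        'Access-Control-Allow-Headers': 'Content-Type',
      });
      if (req.method === 'OPTIONS') return res.sendStatus(204);
      next();
    });

    app.put('/status', (req, res) => res.json({ ok: true }));   // checkStatus probe

    app.put('/config', (req, res) => {                          // saveConfig
      fs.writeFileSync('config.json', JSON.stringify(req.body));
      res.sendStatus(200);
    });

    app.put('/board/:id', (req, res) => {                       // saveBoard
      fs.writeFileSync(`board-${req.params.id}.json`, JSON.stringify(req.body));
      res.sendStatus(200);
    });

    app.delete('/board/:id', (req, res) => {                    // saveBoard delete
      fs.unlinkSync(`board-${req.params.id}.json`);
      res.sendStatus(200);
    });

    app.listen(10001);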
Thanks for writing this. I found it useful and trivial to setup.
did you market it at all? or people just found it organically
I showed it around to people that would listen, but otherwise, no, I didn't market it per se.
This is very neat! I'd be interested to see something like this with a saving mechanism reminiscent of TiddlyWiki [0], which is saved as a portable HTML file. Documents that contain their own editors like this are really neat for offline use and long-term storage.
[0] https://tiddlywiki.com/#SavingMechanism
No need for that download script. Browsers have "save as file" functionality built in!
I don't think using save as keeps the changes you make in HTML using contenteditable.
Ah, you might be right about that. Good point!
I have implemented Google Drive and MS OneDrive support in my one-file video player app. I'm using it for reading, but I think it could be used to push back changes to a file for saving.
I guess that's something you could add into the remote backup server. I'm considering writing my own (in Go/Rust) and I might see if I can add that.
I really think we as developers are underusing setups like this. If you could somehow figure out a simple but effective sync between devices then that would be able to cover many use-cases.
Maybe we just need an SQLite with better support for replicas? Then people have one tiny server with a bunch of SQLite databases to which the apps can sync?
This sounds like a job for WebDAV!
https://en.wikipedia.org/wiki/WebDAV
One major failing of WebDAV for these use cases is that the spec requires you to `PUT` the whole resource/file in order to save changes. This isn't too bad when the individual files are small, but as your single file apps grow, this means a lot of data transfer without the comforts of differential uploads.
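For context, a WebDAV save from the browser really is just a whole-document PUT - the URL is a placeholder and auth is omitted:

    // Upload the entire current document; there's no standard way to send just a diff
    await fetch('https://dav.example.com/apps/board.html', {
      method: 'PUT',
      headers: { 'Content-Type': 'text/html' },
      body: '<!DOCTYPE html>\n' + document.documentElement.outerHTML,
    });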
I'm very interested in this area and this is exactly the approach that I'm pursuing.
You're talking about Zero Data Apps[0]. :)
[0] https://0data.app/
I feel GitHub gists are a very good store for data: writable, versioned, shareable and read-only for non authors, workable, support for arbitrary data including binary
I tried that. The problem is that you will always need a middle-end (a separate server between your stand-alone front end application and the GitHub server). You can't just distribute a stand-alone app and let it talk to GitHub directly. Or well you can but only if you get a token with *FULL* scope for all repositories. What I as a developer need is a scope for only one repository. That's how stand-alone and especially browser-based applications can remain safe. You don't want to store tokens with full permissions in multiple apps.
SyncThing is my go-to for multi-device syncing.
CouchDB + PouchDB
They're the backend for Noteself, which I've been using for a few years now as my life organising note taking tool.
I have it set up self-hosted, and as long as I have internet I can connect to it and update it. If I don't have internet I can browse the contents as per the most recently cached version.
(I can also save it to a single HTML file and back that up in numerous off site locations).
https://noteself.org/
dexieCloud too.
For something as simple as this a manual export/import would be the most appropriate. Probably not a strong foundation for a communication tool.
> manual export/import
That would mean manual busywork every time you start/end a session. If you ever forget one of those steps, your work becomes out of sync and that’s extra work to sort it out. Depending on when you notice, it may take you several hours to fix. Not having to do things manually is what computers are for.
You could build a sync feature that requires some background process on each client along with likely complex configuration for peer-to-peer sharing, or you could build a web backend that runs on a server. There are good reasons why everyone does the latter.
Firefox has Pocket. Maybe, it can be used for synchronization ("pocketing") between devices.
> If you could somehow figure out a simple but effective sync between devices
If only there were a technology one could use to “serve” information from a central storage mechanism, then update this “server” in a deterministic fashion, possibly with security as part of the feature set…
Pouchdb?
Syncthing
I love tools like this. I have my own single HTML file project for a HTTP video player along those lines. https://github.com/pseudosavant/player.html
I'll definitely be looking at the source code to see if there are any ideas I want to incorporate into my own single file tools.
This is fantastic, thanks for sharing!
I've been using something similar for a few years now hacked together from different sources, but yours is much more polished.
I love it. I use Trello as my 2nd brain but it means that I can't do anything offline. Where I can see loving this is if I write a little converter so I can take a Trello board JSON export, load it into Nullboard and work on it offline, then a thing that goes back the other way and creates a Trello board from my NBX file.
Maybe if I put the original trello card ID at the bottom of each NBX "note" and then synched any text back as a new comment on that card, and the list ID in the title of each list and adding any notes without a Trello card link as new cards to that list, it would be a pretty automated way to get a bunch of edits back into Trello where I could tidy up with copy/paste.
Rock on!! Forked the repo and have my new local version pinned.
Glad to see this on here. I've been using this for 3 years now with my club to track race course participants for a 50K. Radio operators at each station call in bib numbers, and we drag their bib from their last waypoint to the new one. There is a laptop in the van mirrored to a TV outside the van. Friends and family can then check on their participant.
https://i.imgur.com/UZWhppc.png
I basically modified the CSS a bit so you can fit multiple cards in a row. Nullboard has a board import/export feature that uses a simple JSON file. I have a small Python script that generates the columns (waypoints) and cards (bib numbers). Then I can import that JSON to start that year's race. While I would like more features (time tracking), it's a rather simple tool that can be easily operated offline and requires no resources other than a web browser.
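The generator described above is a Python script; below is a rough JavaScript sketch of the same idea. The field names are guesses at the .nbx structure, not Nullboard's actual schema.

    // Sketch: one list per waypoint, with every bib starting in the first list.
    // The "format" marker and list/note field names are assumptions.
    const waypoints = ['Start', 'Aid 1', 'Aid 2', 'Aid 3', 'Finish'];
    const bibs = Array.from({ length: 120 }, (_, i) => i + 1);

    const board = {
      format: 20190412,                          // assumed format/version marker
      title: '50K Race',
      lists: waypoints.map((title, i) => ({
        title,
        notes: i === 0 ? bibs.map((b) => ({ text: 'Bib ' + b })) : [],
      })),
    };

    console.log(JSON.stringify(board, null, 2)); // redirect into a .nbx file to import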
I wish there was some browser solution for apps like this where you could save and share your app state between your own devices, and push/share that state with others, all without any server backend involvement e.g. so you could have a Kanban board shared with others and the state would be stored locally and/or in bring-your-own cloud storage.
There are so many apps like this that could be simple but, for robust state saving, involve setting up and maintaining a backend (e.g. with security patches, backups, performance monitoring). There are also the privacy implications of your data being stored on someone's server, and the risk of data leaks.
It's like there's a key part of the internet that's missing.
Something like this could be a browser extension? This exists?
A `roamingStorage` as a relative of `localStorage`, sort of like Windows' "Local App Data" versus "Roaming App Data", would be nice to have in theory.
Of course even if you kept it to the simple KV store interface like `localStorage` you'd need to define sync semantics and conflict resolution mechanics.
Then you'd have to solve all the security concerns of which pages get access to `roamingStorage` and how it determines "same app" and "same user" to avoid rogue apps exfiltrating data from other apps and users.
It would be neat to find an architecture to solve such things and see it added as a web standard.
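Purely as a thought experiment, a hypothetical usage sketch (no such API is standardized or implemented anywhere today):

    // Hypothetical only: a localStorage-style KV store the browser would sync
    // across a signed-in user's devices, with conflicts surfaced to the app.
    async function demo() {
      await roamingStorage.setItem('board:default', JSON.stringify({ lists: [] }));
      const raw = await roamingStorage.getItem('board:default');
      console.log(JSON.parse(raw));

      // The hard part a spec would need to pin down: sync and conflict semantics.
      roamingStorage.addEventListener('conflict', (e) => {
        // e.key, e.local, e.remote -- apply some policy (LWW, merge, ask the user)
        e.resolve(e.remote);
      });
    }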
> Of course even if you kept it to the simple KV store interface like `localStorage` you'd need to define sync semantics and conflict resolution mechanics.
It would be great to have this standardized. Custom two way syncing is a nightmare to implement correctly, and it doesn't make sense for apps to have to keep reinventing the wheel here.
It is a factor in why CRDT/OT posts and libraries get regularly upvoted on HN. A lot of us want a good, easy to use baseline. A lot of us are searching for that and/or thinking we can build that.
Part of why there's always some reinventing the wheel in this space, unfortunately, is that this also seems to be one of the harder problems to generalize. There are always going to be domain specifics to your models, or your users, or your users' idea of your models, that are going to need some custom sync and conflict resolution work. Sync has a lot more application specifics than most of us want.
That said, yeah, if there was a good simple building block "base line" to start with that met a nice 80/20 rule plateau, it would be great for giving apps an easy place to start and the tools to build application and domain specifics as they grow/learn their domain.
(One such 80/20 place to start, if the idea was just a simple KV store, might be a basic "Last Write Wins" approach plus a simple API to read older versions when necessary. You can build a lot of the cool CRDT/OT stuff on top of a store that simple. For many starting apps, LWW is generally a good baseline. It doesn't solve all the syncing/conflict resolution problems, and you'll still need to write app-specific code at some point, but it's a place to start. It's basically where most of the simplest NoSQL databases started.)
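A minimal sketch of that last-write-wins baseline for a KV store, assuming each entry carries a timestamp or logical clock:

    // Sketch: on sync, the newer revision of each key wins; older revisions
    // could be retained elsewhere for a "read older versions" API.
    function mergeLww(local, remote) {
      const merged = { ...local };
      for (const [key, remoteEntry] of Object.entries(remote)) {
        const localEntry = merged[key];
        if (!localEntry || remoteEntry.updatedAt > localEntry.updatedAt) {
          merged[key] = remoteEntry;
        }
      }
      return merged;
    }

    const a = { title: { value: 'Board A', updatedAt: 100 } };
    const b = { title: { value: 'Board B', updatedAt: 200 } };
    console.log(mergeLww(a, b).title.value); // "Board B"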
> It is a factor in why CRDT/OT posts and libraries get regularly upvoted on HN. A lot of us want a good, easy to use baseline. A lot of us are searching for that and/or thinking we can build that.
Do you have any views on the easiest way to do two-way syncing for a web app if you want to avoid relying on proprietary services or a complex niche framework? This comes up for me every few years and I'm always disappointed there isn't a no-brainer way yet to get an 80% good enough solution.
For a couple years I thought PouchDB might give us a strong relatively standardized 80% solution, but then the CouchDB ecosystem kind of exploded (IBM bought Cloudant and pivoted it to a much worse IBM product; Couchbase continued to pivot away from CouchDB compatibility; Apache's politics got real confused real quickly about what the next version of CouchDB should even be and tried to rewrite it in six different languages).
I still think the multi-version document database approach is probably the easiest 80% solution baseline to work with for many projects, especially with a mixture of developer skill levels and document types. It doesn't save you from needing to solve conflicts, of course, but the default behavior (newest version wins) is a predictable one and generally just works until it doesn't work anymore by which point you should have a better idea of your application's specific domains and which conflicts are more important to resolve than others.
It's unfortunate that "Mongo-compatible" won the marketplace instead, because CouchDB had a better focus on standardizing the sync experience between DBs, and "Couch-compatible" was nice while it lasted (not nearly long enough).
I think the CRDT/OT library builders got a little bit too lost for too many years thinking that the "conflict-free" initial "C" in CRDT was fate or manifest destiny rather than hopes and dreams (to see dashed against the hard rocks of reality). As someone whose introduction to CRDT/OT was specifically from the lens of source control, I know that I was a skeptic that they'd ever truly achieve "conflict-free". Conflict resolution in source control will almost always be a human-in-the-loop touch point. I don't think you can solve for "conflict-free", I think you just try for more and more ways to sweep conflicts under the rug and I think that's a pretty good summary for part of why we've seen a lot of increasingly complex frameworks in the space that feel like they need increasingly more PhDs in the room to build apps on top of.
I think we're finally starting to see the other side of that mountain, as more (not all, but a lot more) of the developers working in that space do start to realize that you can't eliminate conflicts entirely in a general way for every application domain, you can just offer better tools for how to manage them when they naturally occur. The last few libraries that I've seen on HN have given me hope that we are starting to see more pragmatic and fewer dogmatic projects in that space and maybe soon one will feel like a real 80% good enough contender for general application development.
Amusingly, or perhaps predictively, I think the ones that are going to get there to feeling like an 80% good enough are going to be the ones that most look like a multi-version JSON document database at the high-level API and maybe also in the low level storage mechanics (to run anywhere localStorage does and/or any Mongo-compatible NoSQL store). In theory though, maybe that CRDT/OT-based version gives you a lot of knobs under the hood in between storage and high level specifics to help push individual applications from 80% to 90%+ if you learn how to configure it. That would be better than just an 80% if that happened, and is a reason to stay excited about the CRDT/OT space.
I think the other side of the wishlist is also hoping for better peer-to-peer/device-to-device solutions for the web and web browsers. There was a flurry of work there during the cryptocoin boom that never quite produced 80% good enough solutions for general applications, or that was too heavily tied to cryptocoin projects that added proprietary complexity. I'm afraid that as the cryptocoin boom has faded, a lot of those projects are already abandoned, and it will be a while before we see another spike in interest in such tools. (Unfortunately AI doesn't need peer-to-peer, it needs big pipes to large central datacenters again.)
You can already do this using things like the Chrome Storage APIs (obviously chrome only, and you need to be signed in, and bundle an extension)
https://developer.chrome.com/docs/extensions/reference/api/s...
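For reference, a minimal sketch of using it from a Manifest V3 extension with the "storage" permission:

    // Sketch: chrome.storage.sync persists small amounts of data and syncs it
    // to any Chrome the user is signed into (subject to the quotas below).
    async function saveBoard(board) {
      await chrome.storage.sync.set({ board }); // ~8 KB per item, ~100 KB total
    }

    async function loadBoard() {
      const { board } = await chrome.storage.sync.get('board');
      return board ?? { lists: [] };
    }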
Yeah, seen this but the storage quota is tiny:
> If syncing is enabled, the data is synced to any Chrome browser that the user is logged into. If disabled, it behaves like storage.local. Chrome stores the data locally when the browser is offline and resumes syncing when it's back online. The quota limitation is approximately 100 KB, 8 KB per item.
8KB is a lot! You just have to be mindful of the constraint. I made an entire video game in 39 KB.
As of writing this there are 113 previous comments, none of which mention "metrics," "Scrum," or "velocity."
I hope any Scrum Master who comes across this thread takes the absence of those terms as an indicator of the value of those things and all the related feature bloat that bedevils Agile project management tools.
I run Planka in an LXC container on Proxmox, but this looks useful (despite being 'beta') for anyone who just wants an absolutely no frills local-first GUI for simple task management.
The README mentions that "Trello wasn't bad", but storing this type of data in the cloud wasn't desirable. Well, Planka is the answer to that.
Neat, very very well done.
One observation: Kanban is all about limiting the work in progress. That’s really its foundation. WIP limit is the main means for controlling and improving overall workflow effectiveness.
I would argue that boards not offering a WIP limit are not really “Kanban” boards, as they defeat the very goal of Kanban.
WIP limits are not important to me.
I use my Trello boards for mental hygiene.
Something comes in, I put it on the board to get it out of my head. Then, when appropriate, I go to the board, rearrange the top 5 or so items by priority, then start on the top item.
Many things never get done, but they have turned out to be lower priority, by definition.
If I am waiting for something to happen on a current task, I put it second or third in the queue.
Keeps me sane....
Neither to me. But it's a matter of names: if it does not handle WIP limits, it's not a "Kanban board". It's a board, and that's perfectly fine.
Cool! This could be interesting as the base for a quick kanban board for mid-sized personal projects.
That said: agree with others that sharing state between devices (either yours or others), and being able to collaborate on the same board, is sort of the canonical feature requirement of kanban boards. They can be used for 1-person projects, goal tracking, etc. - I've used e.g. Notion boards in this way - but they gain most of their value from allowing multiple people to share awareness of task status and ownership.
Plus the use of localStorage means I'd eventually blow away my board state by accident - which is kind of a showstopper IMHO; being able to trust your tools is important.
Still: nice to see people experimenting with what you can do just using web basics :)
I have this and like it a lot. I've used it since a 2020 post on Hacker News. It takes some effort, but it's worth it once you get past the learning curve.
I'm very color-oriented, so my forked version adds colors to help me stay organized. (Yes, I've sent it in as a pull request.)
From the readme:
> Still very much in beta.
The last commit was November 2023.
Well, I didn't have a need to add or to change anything since then.
Here's the full timeline to get a general sense of the development pace - http://nullboard.io/changes
Just wanted to say that I really like the style/design of your change log timeline. :)
So I went and checked my suggestion on colors. You pulled it, but then never actually implemented it (unless I don't know how to toggle it). Are you going to add colors to the current version or should I go back and retro merge the color code I have?
Fair enough, but I wanted to be sure everyone was aware of the situation.
Then it's probably ready to promote to RC or a full release.
If I remember correctly, Gmail stayed in beta for some (~4+) years.
Looking at the repository, maybe the author wants to work on mobile support or think about alternative storage methods. As for the last commit, maybe he's working on something else or simply living his life.
Pretty sure Gmail never stayed 4 years without commits though. Probably not even 1 year. Not disagreeing with the notion itself, just noting that the comparison doesn't really fit.
Fortuitous - I've been looking for a simple Kanban board this week and all the popular ones are a bit heavy (`plane` uses 8 containers!) or insist on using `mysql`.
You might like kanboard, it's a classic LAMP project but it can use sqlite for small installations.
I'll second that. I've used it a lot for my own small projects.
Super, ta, I'll have a look at that.
I use teable with sqlite - it has a kanban board view
Thanks a lot for this. I use it every day and it brings me a lot of joy. We need more tools like this one.
Going to take a look at integrating this later, along with nbagent [1]. I have a 'dark mode' open source project with a few collaborators, many tasks and no good way to currently organise them.
One change I think I'll need to make is to prettify the .nbx (JSON) export so that git diffs work better (see the sketch below). Hopefully nbagent can keep the data synced as the board is updated.
[1] https://github.com/luismedel/nbagent
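A minimal sketch of that prettifying step, assuming the .nbx export is plain JSON:

    // Sketch (Node): re-serialize an .nbx export with indentation so that
    // line-based git diffs stay small and readable.
    const fs = require('fs');

    const file = process.argv[2]; // e.g. board.nbx
    const data = JSON.parse(fs.readFileSync(file, 'utf8'));
    fs.writeFileSync(file, JSON.stringify(data, null, 2) + '\n');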
Really cool. I love that editing a note doesn't involve pop up modal windows, save buttons, focus selection and all the other crap that things like Notion and Trello end up relying on.
You just edit the text. Perfect.
Is there a repo or list of single HTML files like this? I’ve seen people mention Tiddlywiki but that’s it so far.
Nice to see, but I hope he moves away from localStorage, as that is far from ideal. Chrome clears it when cookies are cleared, and many third-party tools suggest this as a way to 'optimize' your web experience. I've often seen people file issues for an application I made that used it.
Thank you for sharing.
I remember working with HTML Applications (HTA) on Windows back in the day, with JScript or VBScript.
I am thinking of tools that would make navigating long plain-text files easier, such as simple table-of-contents generation or indexes.
I thought the primary value of Kanban was about collaboration.
What is the purpose of a 1-person kanban?
How do you collaborate with local storage?
Grocery and ToDo lists. That's Nullboard's primary purpose.
The classic software dilemma when I see something like this.
It is simple, nice and clean.
I immediately want things, knowing deep down that those things, if delivered, would probably take away the essence of what's good about it.
Still... I would like alternative ways to persist and share... it would be nice to manage 1:1s across the multiple teams I run :-p
Given you can put scripts and css in an html file, you can produce basically any website/app and offer it as a single file.
Loaded locally and/or via the Web, are there any other file formats that work this way or is .html the only bootstrapping option browsers support?
This is very cool!
I can't help but think the missing bit is portability of the data file. I wonder if simply allowing a binary or even JSON representation to be copy-pasted from the browser would work well enough.
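A minimal sketch of that copy/paste idea using the Clipboard API (requires a secure context and a user gesture):

    // Sketch: export/import the board state via the clipboard instead of a file.
    async function copyState(state) {
      await navigator.clipboard.writeText(JSON.stringify(state));
    }

    async function pasteState() {
      const text = await navigator.clipboard.readText();
      return JSON.parse(text);
    }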
I like these tools that live in one local HTML file, but this one seems a bit unsupported; the last commit is one year old. If you need something more, but not Jira, then kanboard.org is what I can recommend.
Maybe the author should tweak the grayscale slightly once a week to make the software not look unsupported?
I'm on mobile and can't figure out how to add a card to a board.
Hamburger menu (≡) then +Note. Works for me on iOS
Try the hamburger buttons
That's cool - I've been looking for a simple self-hosted kanban. A backend is a requirement for me to use it across devices, though. But, I love the direction.
This is really cool and something I could use and edit. Responsive CSS for mobile and tablet feels needed.
Nice! How do I enter bulleted (i.e. un-numbered) list items inside each note? Is there a numbered/unnumbered list option? Can I highlight lines inside a note and make them bulleted?
This doesn’t work on iPhone Safari for dragging things
Good stuff!! jQuery may be a bit outdated but it's very good :)
Looks nicer than say kanboard, I may give this a shot
I love it when I find new SFAs (single-file apps).
Very cool project!
It's not a BSD license if it has a "Commons Clause".
Call it whatever you want, but don't ever call it a BSD License if you've modified it.
love it!
I think "single HTML file" sets up a certain expectation that a five-thousand-line long HTML file with ~3500 lines of embedded JS doesn't really live up to. I mean, hey, everything can be a single HTML file if you embed the bundle inline in your HTML!
Cool project, though - don't mean to take away anything from it.
I totally understand your take, but as a guy that spends most of his time on side projects working on single HTML files, I have a different perspective.
I find the totally self-contained nature of them very appealing because it travels well through space and time, and it's incredibly accessible, both online and offline.
My current side project is actually using a WebDAV server to host a wide variety of different single HTML file apps that you can carry around on a USB drive or host on the web. The main trick to these apps is the same trick that TiddlyWiki uses, which is to construct a file in such a way that it can create an updated copy of itself and save it back to the server.
I'm attracted to this approach because it's a way to use relatively modern technologies in a way that is independent from giant corporations that want to hoover up all my data, while also being easy to hack and modify to suit my needs on a day-to-day basis.
TiddlyWiki brings back memories... I'd never been so optimistic about computers!
Do you have a link to all the single HTML file apps you have in mind?
I'm authoring two. One is a keyboard-based mnemonic launcher for accessing various websites; it's basically a way to have your bookmarks outside of the browser and quickly accessible. The other is a tabbed markdown document that can render static copies of itself.
The two projects out in the wild that natively work with this approach are TiddlyWiki and FeatherWiki.
I see room for a lightweight version of a calendar, a world clock, and even a lightweight spreadsheet that could be useful. I also have an idea for something I call a link trap where you can rapidly just drop links in and then search and sort over them to quickly retrieve something you saw before that was interesting. Sort of like my bookmarks-outside-the-browser except more of a history-outside-the-browser.
Is https://rpdillon.net/redbean-tiddlywiki-saver.html still what you're using?
I've been working on projects like this for some time, and that particular project happened to make the Hacker News front page, but it was a server rather than the client files I'm referring to here. It was a side project to play with redbean, since I'm a fan of jart's work.
My primary saver is a Python script that I wrote called Notedeck, but I also sometimes use a Rust webdav server called dufs.
I haven't released either of my projects I'm working on that are the client files, otherwise I would have just linked them.
shameless plug: I also made a single HTML metronome in js
https://github.com/est/metronome
> construct a file in such a way that it can create an updated copy of itself and save it back to the server
Is there some way to accomplish this through GitHub? Like the single html file running on GitHub.io pages can commit the changes to its repo?
You should be able to use the Smart HTTP protocol with HTTP Basic Auth (https://git-scm.com/docs/http-protocol) to push a locally-created commit. Forgejo supports CORS here, but I don't know whether GitHub does.
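A rough sketch of the first step only, with a placeholder forge URL and token; whether it works from a page depends entirely on the forge's CORS policy:

    // Sketch: a Smart HTTP push starts by fetching the ref advertisement
    // from the receive-pack service (see the git http-protocol doc above).
    async function probePush() {
      const repo = 'https://forge.example.com/user/repo.git'; // placeholder
      const auth = 'Basic ' + btoa('user:personal-access-token');
      const res = await fetch(repo + '/info/refs?service=git-receive-pack', {
        headers: { Authorization: auth },
      });
      console.log(res.status, await res.text()); // pkt-line encoded ref advertisement
    }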
Both GitHub and GitLab have killed http basic auth https://docs.github.com/en/rest/authentication/authenticatin... https://docs.gitlab.com/ee/topics/git/troubleshooting_git.ht...
That's not to say one couldn't still do what you're describing via other headers, I'm just saying "<input name=username><input name=password>" won't get it done
I think I'd call that "local-first" or perhaps "offline-first". I can't think of a reason why a local-first app would need to be crammed into a single HTML file, hehe. I do agree it's very cool, though!
I tried to allude to this in my response above, but the fundamental reason that they're all crammed into a single file is to make sharing really easy. This is what I meant when I said it travels well through space.
I definitely get the appeal. It’s analogous to the single file executable. One file to move around, no install process needed, just grab and run. It was the main reason I used to reach for Delphi back in the day for Windows utilities.
Same reason Excel business apps are one file - easy to move around and share without dependencies going missing!
> The main trick to these apps [...] is to construct a file in such a way that it can create an updated copy of itself and save it back to the server.
Any additional info/pointers on this ?
Yes, I have done a lot of analysis on the structure of tiddlywiki and featherwiki to figure out the right way to do this in a more repeatable way. And that's the basis that I'm using for the projects I'm working on. I'm planning to write a blog post on this soon because I've learned a lot. So stay tuned.
For the benefit of the Hacker News audience that are curious, let me take a stab here.
The general strategy is to include JavaScript in the HTML document that knows how to look at various nodes in the DOM and create a new version of the document that uses an updated set of data.
Some sections of the data can be pulled verbatim. So, for example, if you have one giant script at the bottom of the doc, you can give it an ID of perhaps S, and then use that ID to retrieve the outer HTML of that script tag and insert it into the clone.
Other areas of the DOM need to be templated. So for example, I insert a script that is of type JSON, and that contains all of the data for the application. This can be pulled and stringified when creating the clone.
For a minority of attributes like the title and document settings like whether or not you're in light or dark mode, you want to avoid a flash of unstyled content and so you actually templatize those elements so they are written into the cloned copy with the updated data directly inline.
There's no real magic to it, but it can be a little bit tedious. One really interesting gotcha is that the HTML parser recognizes a closing script tag even inside string literals, so I have to break up any script tags that appear in strings I'm manipulating so that the parser does not close out the script I'm writing.
I have a very minimal app that uses this technique called Dextral. I'll be happy to host it on my site and link it here.
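A condensed sketch of that cloning step (element ids, the data-theme attribute, and the page skeleton are illustrative; the visible UI is assumed to be rebuilt from the data on load):

    // Sketch of the self-cloning technique described above.
    function buildUpdatedCopy(appData) {
      // Data lives in a non-executing <script type="application/json" id="d"> tag.
      // Escape "</script" inside the JSON so the HTML parser can't close the tag early.
      const dataJson = JSON.stringify(appData).replace(/<\/script/gi, '<\\/script');

      // The app's own code is copied verbatim from <script id="s">...</script>.
      const appScript = document.getElementById('s').outerHTML;

      // Split the literal closing tag for the same parser reason.
      const closeScript = '</scr' + 'ipt>';

      // Templated bits (title, theme) are written inline to avoid a flash of wrong state.
      return '<!DOCTYPE html>\n'
        + `<html data-theme="${document.documentElement.dataset.theme ?? 'light'}">\n`
        + `<head><meta charset="utf-8"><title>${document.title}</title></head>\n`
        + '<body>\n'
        + `<script type="application/json" id="d">${dataJson}${closeScript}\n`
        + appScript + '\n'
        + '</body>\n</html>';
    }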
Out of curiosity, why WebDAV vs loading the HTML straight from disk wherever the files are, and having a service that's only needed for saving changes?
Edit: I sketched out a basic version in JS + Python, and it's fairly OK to use. The nice parts about this approach are that the HTML files are viewable without any extra services, and the service that enables saving doesn't itself need configuration or have to keep state.
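For comparison, a minimal sketch of such a save-only service in Node; the port, root directory, and bare-PUT convention are all assumptions rather than the version described above:

    // Sketch: HTML files are opened straight from disk; this process only
    // accepts PUT requests and writes the body back to the matching file.
    const http = require('http');
    const fs = require('fs');
    const path = require('path');

    const ROOT = path.resolve(process.argv[2] || '.');

    http.createServer((req, res) => {
      if (req.method !== 'PUT') {
        res.writeHead(405);
        res.end();
        return;
      }
      const target = path.resolve(ROOT, '.' + decodeURIComponent(req.url));
      if (!target.startsWith(ROOT)) {   // crude path traversal guard
        res.writeHead(403);
        res.end();
        return;
      }
      const chunks = [];
      req.on('data', (c) => chunks.push(c));
      req.on('end', () => {
        fs.writeFile(target, Buffer.concat(chunks), (err) => {
          res.writeHead(err ? 500 : 204);
          res.end();
        });
      });
    }).listen(8777);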
You can save without any services at all using the File System API https://developer.mozilla.org/en-US/docs/Web/API/File_System...
The downside is that even though you know the current directory from window.location, the API is written in a way that assumes you either want a default location like "Desktop" or need the user to navigate to the directory before you can do operations on it (for security reasons), even from a local context. The user needs to select the directory once per fresh page load (if you've dynamically reloaded the current content, then multiple saves only prompt once, as long as you save the handle).
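A minimal sketch using the File System Access API (Chromium-only at the time of writing):

    // Sketch: the user picks the destination once; subsequent saves reuse the handle.
    let fileHandle;

    async function saveHtml(html) {
      if (!fileHandle) {
        fileHandle = await window.showSaveFilePicker({
          suggestedName: 'board.html',
          types: [{ description: 'HTML', accept: { 'text/html': ['.html'] } }],
        });
      }
      const writable = await fileHandle.createWritable();
      await writable.write(html);
      await writable.close();
    }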
This rationale appeals to me strongly, especially on a primal level.
But also that’s a lot of code in general
"Single HTML file" sets up an expectation to me that it is (1) browser based and (2) client-side only. In other words, you can just open the file and start going without setting up a server, setting up a database, installing anything, or even having internet access. The last is not technically required but I think it is implied. It does not imply anything about the length of the file or the presence of client-side scripting.
What about "single HTML file" sets up an expectation about size? I'm genuinely confused. They seem like nearly unrelated concepts—this isn't a project about trying to fit a kanban board into a kilobyte or anything like that.
I think there are reasonable expectations around code quality that most projects adhere to, e.g.:
- Split JS out from HTML, split CSS out from HTML
- Keep files reasonably small
So if I read "Single HTML file" I'd expect around a couple hundred lines at most, possibly with some embedded CSS.
It's kind of like saying "I've solved your problem in one line of JS" but then your line of JS is 1000 characters long and is actually 50 statements separated by semicolons. Yes, technically you're not lying, but I was expecting when you said "one line of JS" that it would be roughly the size and shape of a typical line of JS found in the wild.
IMO those expectations are not reasonable at all given the description of the software; they sound more like anti-goals.
When I see “single HTML file” it conjures up the same expectations as when PocketBase[0] describes itself as an “Open Source backend in 1 file”.
That is that I can copy that file (and nothing else) somewhere and open/run it, and the application will function correctly. No internet connection, no external "assets" needed, and definitely no server.
This mode of distribution, along with offline and local-first software, avoiding subscriptions and third party / cloud dependencies, etc. all appeal to me very much.
So far I'm impressed, I appreciate the nice, dense and uncluttered UI out of the box and it seems to cover enough functionality to be useful. I'll definitely look out for a chance to give it a spin on something real.
[0] which I also think is great
> along with offline and local-first software, avoiding susbscriptions and third party / cloud dependencies
Sorry if it's a bit direct and unrelated. I've actually got a question if you wouldn't mind.
I've been in the process of creating a local-first, non-subscription-based Linux/Windows application that acts as a completely local search engine for your own documents and files. It's completely offline, utilises open source LLMs, and you just use it like a search, so for example "what's my home insurance policy number", "how much did I pay for the humidifier", that kinda stuff. Things where you don't know exactly where the file is, but you know what you're looking for. You can browse through the results and go direct to the document if you want, or wait for the LLM response (which can be disabled if you just want plain search) to sift through the sources and give you a verifiable answer (it checks sources for matching words, links to the source directly for you to reference, etc).
My question would be: if you wanted something like this, how much would you pay? I'm going with the complete ownership type model: you pay, you get the exe, and that's that. If you like the new features in the next major release, you pay an upgrade fee or something like that, but it's a "one and done" type affair.
$10, $30, $60, $100? I want it to be accessible to many people that don't want to feed their data to big corps but I also want to be able to make a living off improving it and bringing these features to people.
I've not really worked out any of this monetary side of things, or if it's something people even want. It's something I've developed over time for myself which I think could potentially be useful to other people.
Good question.
I try to avoid proprietary software when I can, so if I wanted something like this I'd definitely look for open source options first. I'll endure a reasonable amount of setup pain to go the open source route over a proprietary app, as long as the solution is what I'm after.
For example, your idea seems to sit somewhere between Alfred (which I’ve bought every upgrade/ultimate pack/whatever for) or Raycast, and an LLM augmented search of a NAS / “personal cloud” server. So assuming I wanted it, if there was no neat and self contained open source solution, I’d probably try to cobble something together with Alfred, Ollama, Open WebUI, etc. (all of which I already run) plus some scripts/code first, too.
That said, for a good, feature-full local/self-hosted solution that does exactly what I want in the absence of an open source option (or if it's just much better in significant ways), I'm generally willing to pay between $20–$100 per major release (I pay around that for e.g. Alfred and the Affinity apps). For this I suppose $30–50 if it was pretty slick and filled an important niche. (I have paid more a handful of times in my life, usually for very specific use cases or if it helps me professionally, but not very recently.)
However, if a nice (or very promising and exciting to me), well maintained open source (GPL/MIT/Apache/BSD type license) solution does [most of] what I want and it’s something I really use (and a smaller project[0]) then I donate $10–30 per month (ex.: Helix, WezTerm). I sometimes do smaller or one-off donations, etc. for stuff I use less but appreciate.
That is, I intentionally pay more for open source I care about, and would humbly suggest considering that option for your project. Though I recognise that sustaining yourself financially that way is more than likely considerably harder, even with my small personal attempt at creating incentives for the world I want to see :)
NB: I do not buy subscription software unless it comes with a genuinely value added service (storing a few MiB on the devs cloud service doesn’t count) or access to data, for instance detailed snow/weather forecasts, market data, an advanced AI model (though my admittedly relatively minimal LLM use is currently >95% local and free/“open source” models).
[0] I don't give money to the Linux kernel devs, etc. as I don't think it's likely to have as much positive impact
Thank you so much for taking the time to write such a detailed reply, it's given me a list of things to think about and it's honestly really appreciated. Completely understand the open source side of things and avoiding proprietary tech, I'm exactly the same and only standing on the shoulders of other open source software. I'm utilising SQLite and FAISS, so just files on disk that technically _any_ frontend or whatever could display, it's your data to do what you want with.
I hadn't heard of Alfred as I'm not in the Apple ecosystem, but yes, after doing a bit of digging, you've hit the nail on the head with the combination of both.
I'll seriously think about making it open source (time to brush up on the different licenses again). I want to keep it accessible so even my grandma could use it. I'm not expecting her to go cloning a git repo and checking dependencies etc, so I'm packaging it into a standalone executable. Maybe making the source open is something for me to consider and people can just pay if they don't want to go through any setup hassle (do I put some soft donation paywall up with a $0 minimum or something - just thinking out loud).
In terms of pricing, you've landed where I was thinking, maybe more towards the $30 end. I mean I think it's pretty slick and fills a niche, but I'm conscious I may be ever so slightly biased. A lot of stuff to mull over. Thanks again, really useful.
Thanks for taking the time to think about these things upfront rather than just cynically chasing money like 99% of paid/monetised apps out there.
It will greatly increase the attractiveness of your software to me if you stick to the philosophy you’ve outlined there.
One approach that I’ve seen and have absolutely no issues with (in fact I think it’s a pretty smart way of doing things) is where a fully open source project provides code and releases on GitHub and in package managers like Homebrew, but also publishes releases as paid software on app stores.
This allows users to pay for the peace of mind of a "verified" app store install that Just Works. It also provides an easy way for the more technical among us to donate. I've switched to paid releases like this myself for at least a couple of fully open source projects just to give a little back.
You see "single HTML file" and think "split JS out, split CSS out"? Did you not see that it was single file?? The title of this wasn't "hundred file React app".
Why mention it when any project can technically be turned into a single html file? In my opinion there is an expectation of simplicity and as a result a small size with that statement.
Turning a large project into a single HTML file results in a mess.
Very different than a project that was made to be a single HTML file from the start.
The simplicity isn't in editing it, which most users never will do. It is in that you can drop a single file anywhere and not worry about its dependencies.
Any project can be a single HTML file, but most projects prefer not to.
Most projects prefer to have a separate database, server side rendering and often even multiple layers of compilers too.
A lot of projects even require hundreds of megabytes of language runtime in addition to the browser stack.
So a single HTML file is still unusual even if it’s something nearly any web app could technically do if they wished.
And for this reason alone, I think it’s unreasonable to have expectations of a JavaScript-less, CSS-less code-golfed HTML file. This isn’t sold as a product of the demo scene (that’s another expectation entirely). This is sold as a practical self-hosting alternative for people who need a kanban quickly and painlessly. Your comment even proves that it works exactly as that kind of solution. So having inlined JS is a feature I’d expect from this rather than complain about.
> a five-thousand-line long HTML file with ~3500 lines of embedded JS
You made me look at the code and I was afraid of what I was going to find.
But man, that code is pretty and well organized, just like the resulting page.
Single HTML file conjures up memories of certain Windows software of the 90s where it's a single .exe, installer-free program. It could be copied onto a floppy and run on my school's computer.
We are definitely coming at this from a different angle.
It sets up precisely the expectation that everything is inline, surely? How else could it be fully implemented in a single file?
If I can actually hit "ctrl-s" and save it offline, that's a huge and worthwhile feature that meaningfully changes how I can interact with the project; it's not purely a fluff description.
That will probably never work easily in any secure browser. I think at the very least you would have to go: ctrl+s, dialog opens hopefully at the right directory you want to store in, enter/return.
The message you're replying to did not mean that "ctrl-s" is literally the only input, but implied the whole file saving workflow that begins with "ctrl-s". The fact that you're pointing out that the other interpretation is ridiculous is enough of a context clue to realize they're not likely to be saying that.
Are you serious?
What's wrong with that statement? Can you access the filesystem without a dialog?
It's pure useless pedantry. Imagine the abysmal signal to noise ratio if discussions went like that more often.
It’s not pedantry at all, he’s highlighting a major usability issue
The point is that you can save the file and then use it offline. The exact set of clicks to make that happen are completely irrelevant to the core point.
So when you save a document in any popular word processing program, and every time you save it, a dialog would pop up asking you where to save it ... You wouldn't find that annoying at all, do I understand that correctly?
Your line of questioning is based on an incorrect set of assumptions.
The number of clicks is an implementation detail. It depends on whether or not you're using the web file API, some browser download capability, a browser plug-in, a mobile app, desktop app, a webdav server, or something else.
For people trying it for the first time, they often have the experience you're describing. But for most anybody that actually picks this up and uses it on a day-to-day basis, they use something else that saves transparently and automatically.
All of this is orthogonal to whether or not it's in a single HTML file. I fear you took lelandbatey's original ctrl-s reference a bit more literally than intended, though if you want to be pedantic, I can confirm I use applications in this style all day as part of my daily workflow and I do press ctrl-s and it saves with no further interaction in fully patched versions of Chrome, Firefox, and Safari with no plugins whatsoever.
Would you be so kind as to share how you manage to get them to save without further interaction? That would really add usability. Maybe I did take the ctrl+s too literally, but being pedantic was not my intention; I was merely hinting at this usability issue. So a solution would be very welcome.
(And I am not the one downvoting you, in fact I couldn't even.)
As an aside: I do find these applications very interesting and am considering to make use of Nullboard myself, but also am weighing it against simply using org mode in Emacs and am looking for any advantage it might offer. Of course the ctrl+s issue plays a role there as well.
I think via something like this:
https://developer.chrome.com/docs/capabilities/web-apis/file...
I disagree - there is value in single file versus multiple file, even if the LoC are exactly the same.
It’s one reason Mac Apps get bundled as a single “file” from the user perspective. You don’t have to “install”, you just copy one file with everything. It’s a simpler dev experience.
Sure there are tradeoffs, but that’s great! We should accept that tradeoffs mean people can chose what works best for their specific context, rather than “best practices” which are silly.
> I mean, hey, everything can be a single HTML file if you embed the bundle inline in your HTML!
Yes, they could be. And then they would have the same super power as this file: you can put it on a flash drive and run it anywhere with no setup or installation.
I have a fully functional copy of Minecraft in a single html file
talk about judging a book by its cover...
Typical HN passive-aggressiveness sandwich. Nitpicking criticism with a half-meant closing praise.
Cool comment, though — don’t mean to take away anything from it.
And Merry Christmas!
Lol, I was just disappointed because I was hoping to see a kanban board fully implemented in CSS/HTML. I do also dislike when nitpicky comments like this get upvoted.