This seems absurd. Just apt install irssi. Why Docker for such a simple self contained tiny app?
It's not absurd. First of all I've been using immutable Linux for 3 years so running Irssi in a container makes the most sense. Of course I'd probably just run it inside a distrobox container instead. And also I've been using a shell server for irssi for many years so it's not that relevant.
But secondly, containerization, despite its vulnerabilities through the years, does add a layer of security to applications. And we must not forget that irc clients have been exploited in the past. Remember the old adage, never irc as root.
The replies talking about portability are wild: my irssi instance started on a Pentium 90 and is now running on an AMD EPYC. The two commands it actually took were:
1) scp
2) dpkg --set-selections
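A sketch of that two-command migration, for the curious (the hostname is a placeholder, and this assumes both machines run a Debian-family distro):

```shell
# Old box: record the installed package set and copy irssi's state over.
dpkg --get-selections > selections.txt
scp -r selections.txt ~/.irssi newbox:    # 'newbox' is a placeholder

# New box: replay the package set and let apt install everything.
sudo dpkg --set-selections < selections.txt
sudo apt-get -y dselect-upgrade
```

The config travels as plain files, the software travels as a package list; no image registry required.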
Reminds me of the "But I Don't Want To Cure Cancer. I Want To Turn People Into Dinosaurs" meme[1]
They don't want to apt install, they want to use docker :-)
[1] https://knowyourmeme.com/memes/but-i-dont-want-to-cure-cance...
For my homelab: portable state. I don't use this image specifically but I use many others.
I put docker-compose files in ~/configdata/_docker/
The docker-compose files always mount volumes inside the ~/configdata/ directory. So let's say irssi has a config directory to mount I'd mount it to ~/configdata/irssi/config/
Then I can just run a daily backup on ~/configdata/ using duplicati or whatever differential backup tool of your choice and be able to restore application state, easily move it to another server, etc.
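A minimal sketch of that layout (the image name and the container-side config path are assumptions, not taken from this image specifically):

```shell
mkdir -p ~/configdata/_docker ~/configdata/irssi/config

# Hypothetical compose file; adjust image and mount target to taste.
cat > ~/configdata/_docker/irssi.yml <<'EOF'
services:
  irssi:
    image: irssi:latest
    volumes:
      - ~/configdata/irssi/config:/home/user/.irssi
EOF

# Daily backup of all application state in one place (duplicati,
# restic, etc. - plain tar shown here for brevity):
tar czf /tmp/configdata-backup.tar.gz -C ~ configdata
```

Because every container's state lives under `~/configdata/`, restoring or moving to another server is just unpacking the archive and re-running compose.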
For software designed to run under your user account like irssi it's pretty much the same, look in ~/.config and ~/.local/share
Sure, this makes sense for a server, but irssi is a client. This is just a program running on your computer. You don't need a "homelab" or any nonsense.
Came here to ask why you'd want to run an app in docker. Genuinely don't get it. Sure, the app doesn't touch the host system so there's isolation there, but the extra overhead doesn't seem justified to me. I'm no docker expert, so correct me if I'm wrong, but isn't this running a stripped-down version of Linux just to run the app? Lighter than a full VM but... Yeah, I don't get it.
I run a ton of apps like this.
Look at it the other way. Why muck up my OS with a bunch of tiny apps? Who knows what version I’ll pull in my repo today. Chances are good it’s outdated with weird patches.
The docker image is built by the devs. All the proper dependencies are baked into the image. It’s going to run exactly as intended every time, no surprises.
And I can pick up the docker file and my configs and run it exactly the same on any OS.
I love watching this in tech, the pendulum swings. This is static linking in another dress.
Soon everyone adopts this, and then someone complains “why is there 500 libc libraries on my machine” or “there was critical bug and I had to update 388 containers - and some maintainers didn’t update and it’s a giant mess!”
Then someone will invent dynamic underlying container sharing (tm) and the pendulum will swing the other way for a bit, and in 2032, one dev will paste your comment in a slightly different form - why muck up my mindvisor with a bunch of tiny apps? Isolated runtimes are built by the devs,
And so on, back and forward forever
> And so on, back and forward forever
My god, we've discovered a genuine perpetual motion machine.
> this is static linking in another dress
Although static linking usually seems to result in small binaries that just run on the target machine, while this needs all the Docker machinery (and the image sizes can get horrendous)
We need a static Linux distro, because I prefer to have a portable app that works on all Linux distros.
Sounds like you need an APE
All of the things you describe are just "package manager, but outside distro control," which is fine I guess but not really a meaningful answer.
I think the real answer is that distro packaging sucks; it tends to involve arcane distro-specific tools and introduce as many or more bugs than it fixes (with the added problem of playing hot potato with the bug reports), on top of delaying updates. Really, what do you gain by using distro packages? (I know the answer is supposedly that you get a set of well-tested versions of your applications that play nice with each other, but that's rarely been delivered in practice)
The maintainers had a project where they ran everything in containers. It helped Docker itself and the ecosystem by getting some interesting software containerized.
I ran irssi for years. I agree... Maybe being paranoid about security?
I don't think it's being paranoid. It's a remotely controlled parser. Fuzzing has turned up a number of bugs in irssi and weechat over the years. Things like malformed color codes, DCC filenames, or even basic protocol messages have led to crashes.
I personally use weechat inside nsjail on a raspberry pi (isolated rpi is enough here, but just for fun): https://github.com/google/nsjail/tree/master/configs
Containers are not the best option for security. VMs and/or a MAC are better.
What do you mean by "MAC"?
Better yet, apt install weechat!
I can see a use running in a Kubernetes cluster or something. Not really sure why you would but I'm sure someone, somewhere has found it useful before.
What I'm confused about is why it's notable enough to be on the front page of HN. If you needed this and you use K8s you could trivially write this Dockerfile yourself.
I can guess a reason: persistence of your IRC server connection(s), across device sessions, and maybe switchable between devices. Without using an IRC bouncer.
So this would be a turnkey way to run this somewhere centralized and persistent, and then you connect to it however you connect to that Docker container (e.g., SSH, remote desktop of some kind).
Of course, a non-Docker way to achieve simple persistence would be to just run a character-terminal IRC client in an SSH-able shell account (or VPS or AWS EC2), inside a `screen` or `tmux` session that can be detached and reattached when SSH-ing in from whatever devices.
(Persistence of your IRC server connections means things like you can see what you missed in scrollback, you aren't being noisy in your channels with join and part messages, you preserve your channel operator status and other channel modes without relying on bots, and you aren't leaking so much info about your movements in real time to random crazy people who hang out in Internet chat rooms.)
(Also, early on, if your leet channels attracted trolls, remaining connected meant that whatever automated countermeasures your client had could help defend the channel. Also, the more people who had channel operator status, the harder it would be for an attacker who, say, exploited a netsplit to hack ops, to de-op them all before a remaining op's scripts detected the mass-deop attack and took out the attacker. Also, your persistence bouncer or shell account obscured your real IP address, so if an attacker targeted your client's IP addr but not your home addr, such as with a protocol or flood attack, you could more likely get back on quickly. Trolls were often annoying, but it was also cyberpunk satisfying when your channel made short work of them.)
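The `screen`/`tmux` variant of that setup amounts to very little (hostname and session name here are placeholders):

```shell
# On the always-on box: start irssi inside a named tmux session.
tmux new-session -d -s irc irssi

# From any device: SSH in and reattach to the same running client.
ssh you@shellbox -t 'tmux attach -t irc'

# Detach with Ctrl-b d; irssi keeps running and stays connected.
```

The client never disconnects from the IRC network; only your SSH session comes and goes.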
Back in the day I would keep an old PC running Linux in the closet just for my IRC and shell needs. Having a vanity domain name was a must if you were lucky enough to have a static IP. I remember undernet adding support to hide your IP once you created an account.
Poor man's abstraction. Docker swarm makes a cheap node pool from random hardware. Compose makes all your apps and config live in git.
You don't _need_ docker, but if you are already set up for it then it's a boon. Adding an app for me to be very available across a fleet of hardware with ceph backed storage is a one-liner.
> Adding an app for me to be very available across a fleet of hardware with ceph backed storage is a one-liner.
But irssi is a chat client:
0 - https://irssi.org/

Don't be ridiculous, IRC is not a protocol that remembers, you need High Availability otherwise if the IRC client goes down you've lost important messages from bloodninja that you can never find again.
My production k8s cluster doesn't have apt. Now I can deploy this!
I find part of the fun of dockerising small apps is in trying to get the image as small as possible with as few files in it as I can. This one looks like it still contains a lot of stuff that's not needed.
For example, my exim image https://hub.docker.com/r/grepular/exim4 is built like this: https://gitlab.com/grepular/docker-exim4/-/blob/main/Dockerf... - The final image only contains the necessary executables, shared lib files, CA certs, timezone files, a few other bits and nothing else.
Wonder if you could go smaller with debian:stable-slim and exim-daemon-light. Also dropping SUID if you don't absolutely need it.
I could go smaller using the light package, but I wanted a full featured Exim in the container. And I'm using the testing image so it keeps up with the latest version of Exim.
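The only-what's-needed trick generalizes: for any dynamically linked binary you can harvest exactly the shared libraries it links against and copy just those into a scratch image. A rough sketch (shown with /bin/ls so it runs anywhere with GNU coreutils; substitute exim, irssi, or anything else):

```shell
BIN=/bin/ls          # stand-in; any dynamically linked binary works
DEST=./rootfs
mkdir -p "$DEST"

# Copy the binary itself, preserving its path under $DEST.
cp --parents "$BIN" "$DEST"

# ldd prints the resolved shared-object paths; copy each of those too.
ldd "$BIN" | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }' \
  | xargs -r -I{} cp --parents {} "$DEST"

# $DEST now holds only the executable and its runtime libraries,
# ready for COPY into a FROM-scratch image.
```

Add CA certs and timezone data as the linked Dockerfile does, and that's essentially the whole image.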
By just apt-get installing irssi, those 'few files' are even shared with other applications, making the whole thing even smaller and without the docker overhead.
used irssi for many years, never needed a docker. *shrug
I use thelounge, it's pretty amazing except it hogs a lot of disk space with the logs. I've also been really sad that you can't update it without restarting the process, which means I lose everything I'm connected to (which on IRC sometimes isn't ideal).
Is it possible to run multiple instances of this, so I could join an irc server that I also spin up in another docker instance and then have a rousing conversation with myself???
I'm curious what makes you think "multiple instances" when you see this? You realise operating systems have been able to run multiple processes for decades, I assume? And you probably think nothing of running multiple shells, web browsers etc? None of this has anything to do with Docker.
I've noticed a lot of people seem to think Docker is some dark art technology when it's really just an amalgamation of various things that you can do anyway.
Incidentally, when I was on dial up and before I had a home network (just one family PC) I started to learn networking by running things like IRC servers (unrealircd to be specific) and multiple clients locally (including eggies). I was really talking to myself in every sense. Was quite fun to give myself ops etc.
Absolutely!
I remember using this to collaborate with docker maintainers about 10 years ago now. Good old days.
I remember interviewing for an internship at about that time and they told me to help answer questions on their IRC.
Better would be a docker container for an IRC server. Something using a modern approach where you could have link attachments, replies for message threads, etc. An IRC Slack alternative.
Is there no dockerized irc server that exists or are you thinking about something else here?
There are, of course: ergo and inspircd work well
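For what it's worth, the self-chat experiment asked about upthread is entirely doable with one of those. A hypothetical sketch (image names and tags are assumptions; check the registries):

```shell
# A private network, one ergo IRC server, two irssi clients.
docker network create ircnet
docker run -d --name ircd --network ircnet ghcr.io/ergochat/ergo:stable

# In two separate terminals, start a client each; inside irssi,
# /connect ircd and /join #me, then converse with yourself.
docker run -it --rm --network ircnet irssi
docker run -it --rm --network ircnet irssi
```

Container names resolve over the shared Docker network, so `ircd` works as the server hostname without any port publishing.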