My kids demand SLOs stricter than Moon exploration technology, so I had to monitor our family’s Minecraft server like a pro. As luck would have it, I am one.
> I am a man of simple tastes, and running the “vanilla” Minecraft server as a Systemd unit on a Linux VM in the cloud
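For anyone replicating that setup, a minimal systemd unit looks roughly like the following sketch; the paths, user, and JVM flags here are assumptions, adjust them for your install:

```ini
# /etc/systemd/system/minecraft.service
[Unit]
Description=Minecraft server
After=network.target

[Service]
User=minecraft
WorkingDirectory=/opt/minecraft
ExecStart=/usr/bin/java -Xms2G -Xmx2G -jar server.jar nogui
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now minecraft` and you have restarts on crash for free.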
Minecraft is famously under-optimized and needy in terms of CPU frequency.
If you’re running a vanilla (no server mods) version, something optimized like PaperMC is a better idea for datacenter VMs.
(Until you need to dupe sand or something.)
The other route is installing a bunch of optimization mods - some really do help.
People love to bother about Java MC performance, but I ran a modded Tekkit server for like 10 years on a base Digital Ocean VM. Shoutout to Digital Ocean for having no impactful changes for 10 years too. They give me a VM, I run the thing, life is good.
From my understanding, Paper and the like are good for Minecraft servers focused on specific mini-games (rather than freeform building), and are the only sensible choice for servers with many people (or not that many people, but really underpowered hardware).
However, they may be a problem if players are sensitive to possible non-vanilla behaviour (as you mentioned, and it’s not limited to cheaty duping). Thankfully, spinning up a server with a selection of performance mods is very easy these days. Various tricks like pre-generating chunks in advance also help.
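For example, with the widely used Chunky mod installed, pre-generation is a couple of server console commands (the center and radius here are just examples):

```
chunky center 0 0
chunky radius 2000
chunky start
```

It grinds through the chunks once up front, so players never hit generation lag at the frontier.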
Monitoring and metric collection makes a lot of sense when you run a production system, or a personal but critical system.
Promoting a telemetry solution for a hobby server, one you host for yourself and that can’t bankrupt you by running up a massive AWS bill, doesn’t seem to make much sense when simply bottling it up in Docker and being able to restart or recreate it at will is enough (mount volumes for logs and persistent data, back them up, and you’re good).
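As a sketch of that Docker route, assuming the popular itzg/minecraft-server image (the env vars and paths come from that image’s docs, not from this thread):

```yaml
services:
  mc:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"        # required by the image
    ports:
      - "25565:25565"
    volumes:
      - ./data:/data      # world, logs, and configs live here; back this up
    restart: unless-stopped
```

Trying a new game version then amounts to pointing a second container at a copy of `./data`.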
With games like Minecraft in particular there’s value in being able to have multiple servers with different worlds, perhaps different mods, etc. If you decide not to have more servers because they are snowflakes you do not have time to set up monitoring for, then you rob yourself and your players of the opportunity to have more fun.
Furthermore, containerizing it allows you to upgrade quickly as new game versions come out, by simply spinning up a new container with your preexisting world as a test, and you get basic system resource usage monitoring built in.
What I think could be a more interesting exercise is a dashboard for friends or family that allows them to manage the lifetime and configuration of their respective containers.
Implementing proper monitoring in a toy system doesn't prepare you to do it in a massive critical system, but at least you may have learned something in the process, and noticed things that may be less evident at a big scale.
In any case, the fun starts when the system has more interdependent components.
The goal of this article is to show you how to integrate with this service from just about anything. It's an ad that was fun to make as a hobby project. I doubt the goal was ever to set up a fully integrated Minecraft monitoring pipeline. At best, an employee at this company decided to show off the flexibility of their product by integrating with a random piece of kit they like.
Luckily, all of the interesting components are existing third party libraries so if you don't want to use their SaaS service, you can build your own Minecraft dashboard pretty easily.
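If you go that route, you don't even need a full Prometheus server to poke at an exporter: its /metrics endpoint is plain text, and a few lines of Python will read it. The metric names in the sample below are illustrative, not taken from any specific exporter:

```python
# Minimal reader for the Prometheus text exposition format, so you can
# pull numbers straight from an exporter's /metrics endpoint.

def parse_metrics(text):
    """Map each metric line's name (with labels, if any) to a float value."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank/HELP/TYPE lines
            continue
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            continue  # ignore lines that don't end in a number
    return metrics

sample = """\
# HELP mc_players_online_total Players currently online
# TYPE mc_players_online_total gauge
mc_players_online_total 3
mc_tps 19.87
"""

print(parse_metrics(sample)["mc_players_online_total"])  # -> 3.0
```

From there, feeding the values into any chart or dashboard you like is the easy part.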
I’ve recently added telemetry to some “toy” apps at my house because a power outage or other unforeseen issue has caused things like my Siri-enabled garage doors to stop working. Now I get alerts through Grafana and Telegram basically for free, which comes in handy.
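If you'd rather push alerts yourself than go through Grafana's built-in Telegram contact point, the Bot API's sendMessage method is a single HTTP call. A sketch in Python, where the token and chat id are placeholders for your own bot credentials:

```python
import json
import urllib.request

def build_alert_request(token, chat_id, text):
    """Return (url, payload) for a Telegram Bot API sendMessage call."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text}
    return url, payload

def send_alert(token, chat_id, text):
    """POST the alert to Telegram and return the decoded API response."""
    url, payload = build_alert_request(token, chat_id, text)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Call it from whatever watches your devices, e.g. `send_alert(TOKEN, CHAT_ID, "garage door offline")`.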
I am currently planning to add monitoring to some toy apps I host on a Raspberry Pi cluster. The intent is that this might save me time and stress further down the road. If a new version makes performance worse, I want to see that in the data. If resource needs go up, I want to know before it's time to move, so that I can plan without any scheduling stress. (I also want to do this partly as an exercise, which is part of the motivation for the cluster and most things I build that run on it. But don't tell anyone!)
Setting up telemetry is really easy if you’ve done it before and it’s a learning opportunity if you haven’t.
I have Dockerfiles from 10 years ago for Grafana and a time-series DB so basically you learn it once and you can bang out basic telemetry infra in an hour afterwards.
And I still actually use InfluxDB and Grafana for my hobby stuff. My current Dockerfiles just look like my old ones…
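For the curious, that stack as a compose file looks roughly like this today; it's a minimal sketch with stock images and default ports, not the commenter's actual files:

```yaml
services:
  influxdb:
    image: influxdb:2
    ports:
      - "8086:8086"
    volumes:
      - influxdb-data:/var/lib/influxdb2
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
volumes:
  influxdb-data:
  grafana-data:
```

Point Grafana at InfluxDB as a data source and the rest is dashboards.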
For this, I have the impression that https://github.com/dirien/minectl might be very close to what you are thinking. I did not try it, but I took the Minecraft Exporter from it and used it in my setup.
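If you wire that exporter into Prometheus yourself, the scrape config is only a few lines; the job name and the :9150 port below are assumptions, so check the exporter's README for its actual listen address:

```yaml
scrape_configs:
  - job_name: minecraft
    static_configs:
      - targets: ["localhost:9150"]
```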
> The minecraft-prometheus-exporter ... which uses Fabric, another way to run Minecraft servers with mods. Like Bukkit, Fabric was not an option for me.
Forge and its recent fork Neoforge are supported too.
darknavi|9 months ago
No C# in Bedrock. No Java unless you're talking about the Android versions. Very little C.
It's mostly C++.
ajmurmann|9 months ago
Am I misguided?