Short answer: YOU COULD, BUT YOU REALLY, REALLY SHOULDN’T.

I’ve been playing around with GitHub Actions a lot recently, and I’ve found it to be remarkably powerful, matching all but a few of the features of dedicated CI providers, with a generous free tier, particularly for open source repos. While working through the workflow trigger docs, I noticed that you could set up a job with a cron schedule that triggers every 5 minutes. That, incidentally, is the same frequency Uptime Robot offers for free, which got me wondering whether it was possible to use Actions to monitor a website’s uptime.

In theory, you could create a workflow that was triggered every 5 minutes and hit a URL with _curl_, failing the job if the request failed or returned an invalid status code. Downtime would then be represented as a failing build, with e-mail notifications built right into the platform. But would this be a reasonable substitute for a dedicated, purpose-built monitoring solution?

The first concern that came to mind was the reliability of Actions itself. If builds failed for spurious reasons, you might stop paying attention by the time you got a real failure notification. Related to this was the latency between jobs: if you wanted a job to run every 5 minutes, would it actually run every 5 minutes?

To test this, I set up a repo with a workflow that recorded the epoch time to a file every 5 minutes. I left it running for 11 days (I had planned on just 7, but got distracted). During this time there were 2150 runs of the workflow and 2147 times recorded. There were three spurious failures, only one of which corresponded to an Actions incident on the GitHub Status page. That’s about a 99.86% success rate, pretty decent.

Downloading the list of recorded times and dropping them into a spreadsheet, a plot of the _duration between recorded runs_ showed something surprising.
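(The analysis itself needs nothing fancier than a spreadsheet, but here’s a minimal Python sketch of the same gap computation, using made-up toy timestamps rather than the real recorded data:)

```python
# Compute the gaps between consecutive recorded runs, and the fraction
# of runs that started within a given window of the previous one.

def gap_stats(epoch_times, windows=(6 * 60, 10 * 60)):
    """Return (gaps, {window_seconds: fraction_of_gaps_within_window})."""
    times = sorted(epoch_times)
    gaps = [b - a for a, b in zip(times, times[1:])]
    fractions = {w: sum(g <= w for g in gaps) / len(gaps) for w in windows}
    return gaps, fractions

# Toy data: runs roughly 5 minutes apart, with one 40-minute gap.
recorded = [0, 310, 620, 930, 3330, 3640]
gaps, fractions = gap_stats(recorded)
print(max(gaps) / 60)      # -> 40.0 (longest gap, in minutes)
print(fractions[6 * 60])   # -> 0.8 (fraction started within 6 minutes)
```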
The majority of runs were separated by a little over 5 minutes, as expected, but a decent number were delayed by up to 20 minutes, with regular spikes of over 40 minutes. All in all, less than 20% of runs started within 6 minutes of the previous one, and only 88% started within 10 minutes. The 40-minute spikes all seemed to occur at midnight UTC, which suggests to me that some kind of maintenance task is performed at that time and delays new jobs from running.

Irregular sampling, plus predictable 40-minute windows with no monitoring at all, is certainly not going to give you confidence in your uptime metrics. And it’s kind of unreasonable to expect otherwise, since Actions wasn’t designed for this. When you’re running CI, these kinds of delays are annoying but acceptable, especially when you’re getting it super cheap, if not free, and sharing resources with countless other developers.

But just in case you’re thinking of clever ways to work around this, perhaps with long-running jobs and complex leader-election mechanisms to hand control from one job to the next, there are some other, more obvious drawbacks. Using Actions in this way degrades your experience in the GitHub UI, clogging your activity feed with a mention of every time the workflow updates its records.

Finally,
and crucially, it’s very likely against the terms of service:

> Actions should not be used for:
>
> …
>
> serverless computing;
>
> …
>
> any other activity unrelated to the production, testing, deployment, or publication of the software project associated with the repository where GitHub Actions are used.

It could be argued that monitoring counts as “production” or “testing”, but I wouldn’t want to risk having my monitoring discontinued, in the worst case with zero notice.

GitHub Actions is a CI tool. If you’re thinking of using it for any use case other than CI, you should probably stop and look for a purpose-built alternative.
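(For reference, the naive check workflow described at the top would look something along these lines. This is an illustrative sketch, not a tested config, and the URL is a placeholder; per everything above, point a real monitoring service at your site instead:)

```yaml
name: uptime-check
on:
  schedule:
    # Nominally every 5 minutes; as the data above shows, don't
    # expect this cadence to be honoured in practice.
    - cron: '*/5 * * * *'

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      # --fail makes curl exit non-zero on HTTP 4xx/5xx responses,
      # which fails the job and triggers GitHub's failure e-mails.
      - name: Hit the site
        run: curl --fail --silent --show-error --max-time 10 https://example.com
```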