Varying Kubernetes Cron schedules in Terraform

Ondřej Popelka
5 min read · May 18, 2024


I have an indexing process that is supposed to run approximately every half an hour. It takes a bunch of projects (whatever that is) identified by a common identifier (an organization id), crunches their data for a couple of minutes (depending on the size of the project), generates an index and stores it in cloud storage.

We use a Kubernetes cluster for running everything, so this obviously translates into a Kubernetes CronJob deployment. Everything is managed in Terraform, so it boils down to a simple configuration of a kubernetes_cron_job_v1 resource. Now for the more interesting part.

The indexing process should run as often as possible, at the very least every half an hour. At the same time, it is not very fast and its runtime varies a lot: it can take anything from a few seconds to tens of minutes in the worst case, even for the same organization, because it runs incrementally and the runtime depends on the number of changes since the last run. It is quite unpredictable. So there is a substantial risk that if we have one process for all organizations, it will not manage to process them in time.

Luckily, the organizations are completely isolated entities, so both the source data and the indexes can be read and written completely independently. So instead of running one job that indexes all of the organizations, let’s run a separate CronJob for each organization in parallel, with the following Terraform definition:

resource "kubernetes_cron_job_v1" "index_builder_cronjob" {
// The builder should run for each organization separately in parallel
for_each = toset(split(",", var.organization_ids))

metadata {
namespace = var.k8s_namespace
name = "index-builder"
}

spec {
schedule = "0,30 * * * *"
concurrency_policy = "Forbid"

job_template {
metadata { ... }
spec {
template {
metadata { ... }
spec {
restart_policy = "OnFailure"
container {
name = "builder"
image = "${var.builder_image_name}:${var.app_image_tag}"

env {
name = "ORGANIZATION_ID"
value = each.key
}

... rest of the definition ...
}
}
}
}
}
}
}

The input is a Terraform variable organization_ids, which is a string of comma-separated integer ids (e.g. "7,20,863,195,81,3").
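
The variable declaration itself is not shown above; a minimal sketch matching that usage could look like this (the description text is mine, only the name and the string type are implied by the code):

// Hypothetical declaration of the input variable used above.
variable "organization_ids" {
  description = "Comma separated organization ids, e.g. \"7,20,863,195,81,3\""
  type        = string
}

// toset(split(",", var.organization_ids)) then yields a set of id strings,
// e.g. {"7", "20", "863", "195", "81", "3"}, one CronJob per element.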

Then I use the for_each meta-argument to generate as many CronJobs as there are organizations, each with the ORGANIZATION_ID environment variable set to the corresponding organization id. The schedule runs every thirty minutes (on the 0th and 30th minute). concurrency_policy is set to Forbid because a job for a single organization must not run in parallel with itself.

Easy Peasy Lemon Squeezy

However, at the same time the indexing process is quite CPU and memory heavy (8 GB+). Running it for a couple of organizations at once triggers a massive scale-up of the Kubernetes cluster. Since I don’t really care about the exact time (it doesn’t matter when the index refreshes, the important part is that it should not be older than 30 minutes), I would really like to set the schedule to something like:

resource "kubernetes_cron_job_v1" "index_builder_cronjob" {
for_each = toset(split(",", var.organization_ids))

spec {
schedule = "run about every thirty minutes, but do not start everything at once, you know"
concurrency_policy = "Forbid"

job_template {
...
}
}
}

That’s obviously Impossible™ with CronJobs and a declarative Kubernetes manifest or Terraform configuration. But let’s see how far we can get. A few things hold for my setup:

  • Each indexer instance is identified by an organization id.
  • The id is an integer.
  • The id is within some range, let’s say 1–100.
  • The ids are somewhat evenly distributed across that interval.

If all of this holds, we can use a simple linear-scaling formula to map the original range of ids onto the range of minutes. I prefer to write it in this form:

y = (x − A) ⋅ (B − Y) / (Z − A) + Y

Where:

  • x is the original value.
  • A and Z are the original range (in my case 1 and 100).
  • Y and B are the new range (in my case 0 and 59).

So, if we substitute the values into the formula we get:

y = (x − 1) ⋅ (59 − 0) / (100 − 1) + 0

This transforms the range 1–100 to the range 0–59, which gives us the starting minute of the CronJob. Now we need it to run once again half an hour later, so let’s simply add 30 to that expression. If the starting minute is 4, the second minute is 34. If the starting minute is 51, the second minute is 81.

Invalid Arguments

Luckily, this is easily fixed by applying modulo to the second minute, which gives us the remainder:

  • 34 mod 59 = 34 (starts at 4th and 34th minute),
  • 81 mod 59 = 22 (starts at 51st and 22nd minute)
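
The same arithmetic can be checked quickly, for example in terraform console, one expression at a time (the trailing comments are just the expected results):

(4 + 30) % 59   // = 34
(51 + 30) % 59  // = 22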

Putting it all together

Computation of the first minute (each.value is the organization id):

min(59, floor((each.value - 1) * 59 / (100 - 1)))

Computation of the second minute:

(min(59, floor((each.value - 1) * 59 / (100 - 1))) + 30) % 59

The min function is there to handle the case when the id is over the specified range (1–100): it doesn’t let the minute overflow past 59. If that starts happening, all we need to do is enlarge the interval, which will cause the schedules to be recomputed. It can also be easily detected, because multiple schedules will occupy the 59th minute.

If there is a risk of getting id = 0 or less, I can either adjust the interval or add a max(0, …) to the expression to clip the value at 0.
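
For illustration, the clamped variant of the first-minute expression would look like this (it is just the expression above wrapped in an extra max, which my current range does not need):

// Hypothetical guard: clip minutes to 0 even if the id falls below the assumed interval.
min(59, max(0, floor((each.value - 1) * 59 / (100 - 1))))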

The floor function rounds the transformed value down to a whole minute. 100 - 1 defines the original interval, and for clarity I intentionally leave it written as 100 - 1 instead of 99.

The second minute adds 30 and returns the remainder of division by 59.
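
As an aside, if the repeated expression feels unwieldy, the same arithmetic could be pulled out into locals; this is only a possible refactoring sketch (the local names are made up), not what the configuration below uses:

locals {
  // One entry per organization id: the computed first and second minute.
  index_builder_first_minute = {
    for id in toset(split(",", var.organization_ids)) :
    id => min(59, floor((tonumber(id) - 1) * 59 / (100 - 1)))
  }
  index_builder_second_minute = {
    for id in keys(local.index_builder_first_minute) :
    id => (local.index_builder_first_minute[id] + 30) % 59
  }
}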

The result of all this is an expression in declarative form that can be put into the Terraform configuration:

  spec {
    schedule = format(
      "%s,%s * * * *",
      min(59, floor((each.value - 1) * 59 / (100 - 1))),
      (min(59, floor((each.value - 1) * 59 / (100 - 1))) + 30) % 59
    )
    concurrency_policy = "Forbid"
    ...
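
To double-check what actually gets generated, one option is to expose the schedules as a Terraform output and inspect them after plan or apply. This is only an optional sketch; the output name is made up, and it assumes the kubernetes provider exposes spec as a single-element block list (which is how it is typically represented):

// Optional helper: map of organization id => generated cron schedule.
output "index_builder_schedules" {
  value = {
    for org_id, job in kubernetes_cron_job_v1.index_builder_cronjob :
    org_id => job.spec[0].schedule
  }
}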

A couple of sample schedules:

Id  | First minute | Second minute
----|--------------|--------------
4   | 4            | 34
14  | 8            | 38
45  | 27           | 57
87  | 52           | 23
99  | 59           | 30
105 | 59           | 30 (id outside of range)

Works like a charm. The biggest issue would arise if the ids were not distributed somewhat evenly on the original scale; then the solution depends on how they are distributed. Otherwise this does what it needs to: it varies the schedules so that they don’t all fall into the same minute. At the same time, the schedules are not completely random (as in unpredictable), which is actually a good thing.
