Add dhall-kubernetes support for "schemas" #84

Merged · 4 commits · Nov 18, 2019
19 changes: 19 additions & 0 deletions dhall-kubernetes-generator/src/Main.hs
@@ -1,3 +1,5 @@
{-# LANGUAGE OverloadedLists #-}

module Main (main) where

import qualified Data.Map.Strict as Data.Map
@@ -74,15 +76,32 @@ main = do
let path = "./defaults" Turtle.</> Turtle.fromText (name <> ".dhall")
writeDhall path expr

let toSchema (ModelName key) _ _ =
Dhall.RecordLit
[ ("Type", Dhall.Embed (Convert.mkImport ["types", ".."] (key <> ".dhall")))
, ("default", Dhall.Embed (Convert.mkImport ["defaults", ".."] (key <> ".dhall")))
]

let schemas = Data.Map.intersectionWithKey toSchema types defaults

-- Output schemas that combine both the types and defaults
Turtle.mktree "schemas"
for_ (Data.Map.toList schemas) $ \(ModelName name, expr) -> do
let path = "./schemas" Turtle.</> Turtle.fromText (name <> ".dhall")
writeDhall path expr

-- Output the types record, the defaults record, and the giant union type
let objectNames = Data.Map.keys types
typesMap = Convert.getImportsMap objectNames "types" $ Data.Map.keys types
defaultsMap = Convert.getImportsMap objectNames "defaults" $ Data.Map.keys defaults
schemasMap = Convert.getImportsMap objectNames "schemas" $ Data.Map.keys schemas

typesRecordPath = "./types.dhall"
typesUnionPath = "./typesUnion.dhall"
defaultsRecordPath = "./defaults.dhall"
schemasRecordPath = "./schemas.dhall"

writeDhall typesUnionPath (Dhall.Union $ fmap Just typesMap)
writeDhall typesRecordPath (Dhall.RecordLit typesMap)
writeDhall defaultsRecordPath (Dhall.RecordLit defaultsMap)
writeDhall schemasRecordPath (Dhall.RecordLit schemasMap)
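
For illustration (not part of the diff), each file written to ./schemas pairs a type with its matching default, so a generated schema would look roughly like the sketch below. The import paths are an assumption based on the ["types", ".."] / ["defaults", ".."] arguments to Convert.mkImport above; the exact rendering may differ.

-- Hypothetical contents of ./schemas/io.k8s.api.apps.v1.DaemonSet.dhall
{ Type = ../types/io.k8s.api.apps.v1.DaemonSet.dhall
, default = ../defaults/io.k8s.api.apps.v1.DaemonSet.dhall
}

The generated ./schemas.dhall then collects these files into a single record keyed the same way as types.dhall, which is what lets the example below write kubernetes.DaemonSet::{ … }: in Dhall, T::r is record completion and roughly desugars to (T.default // r) : T.Type.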
104 changes: 104 additions & 0 deletions examples/aws-iam-authenticator-chart.dhall
@@ -0,0 +1,104 @@
let kubernetes = ../schemas.dhall

let release = "wintering-rodent"

let name = "aws-iam-authenticator"

let fullName = "${release}-${name}"

let version = "0.1.1"

let chart = "${name}-${version}"

let heritage = "dhall"

in kubernetes.DaemonSet::{
, metadata = kubernetes.ObjectMeta::{
, name = fullName
, labels = toMap
{ app = name
, chart = chart
, release = release
, heritage = heritage
}
}
, spec = Some kubernetes.DaemonSetSpec::{
, updateStrategy = Some kubernetes.DaemonSetUpdateStrategy::{
, type = Some "RollingUpdate"
}
, template = kubernetes.PodTemplateSpec::{
, metadata = kubernetes.ObjectMeta::{
, name = name
Member

This is one of the places where I think you can omit the metadata.name attribute.
This is a manifestation of #8 (comment). I think the ground rule is that only top-level objects need name; any template objects can (and must?) omit it instead.

I think if you explicitly set metadata.name here, really weird things might happen: every pod will have the same name, e.g. there will never be more than one pod created even though the daemonset wants to create more than one.

Also, I think you must set template.selector, otherwise the pods that the daemonset creates will not associate with the daemonset (see https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/).
It used to be the case that if you left out selector it would be implicitly created, but I think that behaviour was removed recently. At least for Deployment; maybe not for DaemonSet.

I'm poking @akshaymankar here just to be sure.

@arianvp (Member), Oct 19, 2019

This is a bug that we should fix: name should not be a required attribute of ObjectMeta. The bug issue is #8.

Contributor (Author)

Yeah, I only added the name field because the type required it.

For reference, this is based off of: https://github.com/helm/charts/blob/550096fcda27d7637a7a066240c61a4c6cb61f21/stable/aws-iam-authenticator/templates/daemonset.yaml

@arianvp (Member), Oct 19, 2019

Ah yeah, that chart uses an older version of the DaemonSet API. Ours requires the selector (https://github.com/dhall-lang/dhall-kubernetes/blob/master/types/io.k8s.api.apps.v1.DaemonSetSpec.dhall), so your example will hopefully give you a nice type error! (Woohoo, Dhall types!)

Contributor (Author)

@arianvp: The example type-checks because the default record for DaemonSetSpec defines an empty selector field.

Member

Ahh yes, I remember now. We have a bug open for this as well :) #78

> I think if you explicitly set metadata.name here, really weird things might happen.

I checked with a daemonset: nothing weird happened, the controller just ignored the name field while creating pods. But it might confuse somebody reading a config.

> Also I think you must set template.selector

This is true: if you don't set it, it is a validation error, but only for daemonsets. Not sure if this should be a type error, given that the OpenAPI spec doesn't say anything about it. (A selector sketch follows the full example below.)

, annotations = toMap
{ `scheduler.alpha.kubernetes.io/critical-pod` = ""
}
, labels = toMap
{ app = name
, release = release
}
}
, spec = Some kubernetes.PodSpec::{
, hostNetwork = Some True
, nodeSelector = toMap
{ `node-role.kubernetes.io/master` = ""
}
, tolerations =
[ kubernetes.Toleration::{
, effect = Some "NoSchedule"
, key = Some "node-role.kubernetes.io/master"
}
, kubernetes.Toleration::{
, key = Some "CriticalAddonsOnly"
, operator = Some "Exists"
}
]
, containers =
[ kubernetes.Container::{
, name = fullName
, image = Some "gcr.io/heptio-images/authenticator:v0.1.0"
, args =
[ "server"
, "--config=/etc/aws-iam-authenticator/config.yaml"
, "--state-dir=/var/aws-iam-authenticator"
, "--generate-kubeconfig=/etc/kubernetes/aws-iam-authenticator/kubeconfig.yaml"
]
, volumeMounts =
[ kubernetes.VolumeMount::{
, name = "config"
, mountPath = "/etc/aws-iam-authenticator/"
}
, kubernetes.VolumeMount::{
, name = "state"
, mountPath = "/var/aws-iam-authenticator/"
}
, kubernetes.VolumeMount::{
, name = "output"
, mountPath = "/etc/kubernetes/aws-iam-authenticator/"
}
]
}
]
, volumes =
[ kubernetes.Volume::{
, name = "config"
, configMap = Some kubernetes.ConfigMapVolumeSource::{
, name = Some fullName
}
}
, kubernetes.Volume::{
, name = "output"
, hostPath = Some kubernetes.HostPathVolumeSource::{
, path = "/srv/kubernetes/aws-iam-authenticator/"
}
}
, kubernetes.Volume::{
, name = "state"
, hostPath = Some kubernetes.HostPathVolumeSource::{
, path = "/srv/kubernetes/aws-iam-authenticator/"
}
}
]
}
}
}
}
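
Following up on the selector discussion in the review thread above, here is a minimal, hypothetical sketch of a label selector that could be supplied to the DaemonSet's spec so the controller associates the pods it creates with the DaemonSet. The field names follow the Kubernetes LabelSelector schema; whether the fields are Optional (the Some) and whether matchLabels takes a toMap association list depends on the generated Dhall types, so treat the details as assumptions.

let kubernetes = ./schemas.dhall

let name = "aws-iam-authenticator"

let release = "wintering-rodent"

-- Hypothetical explicit selector; these labels must also appear in
-- template.metadata.labels so the DaemonSet matches the pods it creates.
in kubernetes.LabelSelector::{
, matchLabels = Some (toMap { app = name, release = release })
}

In the chart above this value would go under spec.selector, next to the existing template. Per the discussion, omitting it currently type-checks only because the DaemonSetSpec default supplies an empty selector (#78), while the Kubernetes API server rejects a daemonset without one at validation time.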