Kubernetes Installation

A guide to installing Shoreline Agents in Kubernetes environments.

You can install Shoreline within your Kubernetes cluster using one of two methods: the Shoreline Helm chart or a manual installation.

Install with Helm

Helm simplifies the process of installing the Prometheus node exporter and the Shoreline Agents.

Prerequisites
  1. Install the Prometheus node exporter with the prometheus-node-exporter Helm chart:

    $ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && \
        helm repo update && \
        helm install node-exporter prometheus-community/prometheus-node-exporter \
        --namespace="<namespace>" --create-namespace
    
    NAME: node-exporter
    LAST DEPLOYED: Tue Jun 29 10:15:29 2021
    NAMESPACE: acme
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    

    The specified namespace is used throughout the installation process, so set it to something appropriate, such as your company name. These examples use the acme namespace.

  2. Confirm that the node-exporter service is deployed:

    $ kubectl get svc -n acme
    
    NAME                                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
    node-exporter-prometheus-node-exporter   ClusterIP   10.98.96.198   <none>        9100/TCP   4m33s
    
  3. Get the endpoints for your <namespace>:

    $ kubectl get endpoints -n acme
    
    NAME                                     ENDPOINTS           AGE
    node-exporter-prometheus-node-exporter   192.168.49.2:9100   5m2s
    
  4. Add the Shoreline Helm chart repository:

    $ helm repo add shoreline 'https://raw.githubusercontent.com/shorelinesoftware/shoreline-client/master/' \
      --username '<shoreline_provided_username>' --password '<shoreline_provided_password>'
    
  5. Verify that the repository was added:

    $ helm repo list | grep shoreline
    
    shoreline  https://raw.githubusercontent.com/shorelinesoftware/shoreline-client/master/
    
  6. Create a local shoreline.yaml file to configure the Shoreline Helm chart:

    global:
      name: 'shoreline'
      # Unique customer ID for your organization; replace this value
      customer_id: '<customer_id>'
      # Unique customer secret
      customer_secret: '<customer_secret>'
      # Agent endpoint for your cluster
      customer_endpoint: '<customer_endpoint>:443'
    # agent specific values
    agent:
      serviceaccount:
        # Used for IRSA to access AWS resources via service accounts. To enable IRSA,
        # set the irsa flag to true and set aws_role to the IAM role ARN to associate
        # with the service account.
        irsa: true
        # A valid IAM role ARN, required when irsa is true.
        aws_role: ''
      daemonset:
        resources:
          limits:
            cpu: 500m
            memory: 500Mi
        image: 'docker.pkg.github.com/shorelinesoftware/shoreline-client/shoreline-client'
        tag: '<shoreline_provided_version_tag>'
        imageCredentials:
          server: 'https://docker.pkg.github.com/'
          username: '<shoreline_provided_username>'
          password: '<shoreline_provided_password>'
    # scraper config values
    exporters:
      scrapers:
        node_exporter:
          # namespace where the node exporter is running
          namespace: '<namespace>'
          # kubectl get svc -n <node-exporter-namespace>
          service: '<node_exporter_service_name>.<namespace>.svc.cluster.local'
          # name of the endpoint `kubectl get endpoints -n <node-exporter-namespace>`
          # required for prometheus auto discovery of node exporter
          regex: '<node_exporter_endpoint>'
        # enable or disable envoy metrics
        envoy:
          enabled: true
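
    If you enable IRSA, aws_role must be the ARN of an IAM role whose trust policy allows the Agent's service account to assume it. A quick way to look up the ARN (the role name shoreline-agent-irsa is illustrative; use whatever role your administrator created for IRSA):

    $ aws iam get-role --role-name shoreline-agent-irsa \
        --query 'Role.Arn' --output text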
    
  7. Replace all <xyz> placeholders with appropriate values:

    • customer_id: Unique customer ID, e.g. acme
    • customer_secret: Unique customer secret
    • customer_endpoint: The secure Agent gateway endpoint, e.g. agent-gateway.shoreline-acme.io:443
    • shoreline_provided_version_tag: The tagged Agent version to be installed, as provided by Shoreline, e.g. release-0.20.0
    • shoreline_provided_username: The same repo username used previously
    • shoreline_provided_password: The same repo password used previously
    • namespace: The namespace defined in the first step, e.g. acme
    • node_exporter_service_name: The name returned by kubectl get svc -n <namespace>, e.g. node-exporter-prometheus-node-exporter
    • node_exporter_endpoint: The endpoint returned by kubectl get endpoints -n <namespace>, e.g. node-exporter-prometheus-node-exporter
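
    You can make these substitutions in your editor, or in bulk with a quick sed pass (GNU sed shown; the values below are illustrative):

    $ sed -i \
        -e 's/<customer_id>/acme/' \
        -e 's/<customer_secret>/s3cr3t/' \
        -e 's/<namespace>/acme/g' \
        shoreline.yaml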
  8. Finally, install the Shoreline Agent using the shoreline.yaml Helm configuration you created:

    $ helm install shoreline-agent -f shoreline.yaml --namespace acme \
        --create-namespace shoreline/shoreline-agent
    
    NAME: shoreline-agent
    LAST DEPLOYED: Tue Jun 29 11:31:46 2021
    NAMESPACE: acme
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
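
    Once deployed, you can confirm that the Agent DaemonSet pods are running (expect one pod per node once the rollout completes):

    $ kubectl get daemonset,pods -n acme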
    

Manual Installation

For full control over the installation process, you can opt to manually install Shoreline Agents on your Kubernetes cluster.

Create a Policy

Create an AWS IAM policy granting the Agent the permissions Shoreline requires. You can create it through either the AWS Console or the AWS CLI.
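
As a sketch of the CLI route (the file and role names here are illustrative; the policy document itself, with the exact permissions the Agent needs, comes from Shoreline):

    $ aws iam create-policy \
        --policy-name shoreline-agent-policy \
        --policy-document file://shoreline-agent-policy.json

    $ aws iam attach-role-policy \
        --role-name shoreline-agent-irsa \
        --policy-arn 'arn:aws:iam::<account_id>:policy/shoreline-agent-policy'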

Configure Node Exporter (Optional)

  1. Create a node-exporter.yaml file with the following content:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: monitoring
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: node-exporter
      namespace: monitoring
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/component: node-exporter
        app.kubernetes.io/name: node-exporter
        k8s-app: node-exporter
      name: node-exporter
      namespace: monitoring
    spec:
      clusterIP: None
      ports:
        - name: http
          port: 9100
          protocol: TCP
          targetPort: 9100
      selector:
        k8s-app: node-exporter
      sessionAffinity: None
      type: ClusterIP
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        app.kubernetes.io/component: node-exporter
        app.kubernetes.io/name: node-exporter
        k8s-app: node-exporter
      name: node-exporter
      namespace: monitoring
    spec:
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: node-exporter
      template:
        metadata:
          labels:
            k8s-app: node-exporter
        spec:
          containers:
            - args:
                - --path.procfs=/host/proc
                - --path.sysfs=/host/sys
              image: quay.io/prometheus/node-exporter:v1.0.1
              imagePullPolicy: Always
              name: prometheus-node-exporter
              ports:
                - containerPort: 9100
                  hostPort: 9100
                  name: metrics
                  protocol: TCP
              resources:
                limits:
                  cpu: 400m
                  memory: 100Mi
                requests:
                  cpu: 400m
                  memory: 100Mi
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
                - mountPath: /host/proc
                  name: proc
                  readOnly: true
                - mountPath: /host/sys
                  name: sys
                  readOnly: true
          dnsPolicy: ClusterFirst
          hostNetwork: true
          hostPID: true
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          serviceAccount: node-exporter
          serviceAccountName: node-exporter
          terminationGracePeriodSeconds: 30
          volumes:
            - hostPath:
                path: /proc
                type: ''
              name: proc
            - hostPath:
                path: /sys
                type: ''
              name: sys
      updateStrategy:
        type: OnDelete
    
  2. Apply the node exporter configuration:

    $ kubectl apply -f node-exporter.yaml
    
    namespace/monitoring created
    serviceaccount/node-exporter created
    service/node-exporter created
    daemonset.apps/node-exporter created
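
    Optionally, confirm that the exporter pods are up and serving metrics; the port-forward below goes through the headless node-exporter Service created above:

    $ kubectl get pods -n monitoring -l k8s-app=node-exporter
    $ kubectl port-forward -n monitoring svc/node-exporter 9100:9100 &
    $ curl -s http://localhost:9100/metrics | head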
    

Configure the Metric Scraper

  1. Create a scraper-config.yaml file with the following initial content. The Shoreline Agent DaemonSet mounts this ConfigMap, so it must be created in the same shoreline namespace the Agent runs in; if that namespace doesn't exist yet, create it first with kubectl create namespace shoreline:

    apiVersion: v1
    data:
      scraper.yml: |
        scrape_configs:
          - job_name: ''
    kind: ConfigMap
    metadata:
      name: scraper-config
      namespace: shoreline
    
  2. Add one or more job_name sections with a configuration appropriate to your environment.
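
    For example, a minimal job that scrapes the node exporter from the previous section through its headless Service DNS name (this sketch assumes the scraper accepts standard Prometheus scrape_configs syntax):

    scrape_configs:
      - job_name: 'node-exporter'
        scrape_interval: 15s
        dns_sd_configs:
          - names:
              - 'node-exporter.monitoring.svc.cluster.local'
            type: 'A'
            port: 9100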

  3. Apply the scraper configuration:

    $ kubectl apply -f scraper-config.yaml
    
    configmap/scraper-config created
    

Configure Shoreline

  1. Create a shoreline.yaml file with the following content:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: shoreline
    ---
    apiVersion: v1
    data:
      customer-secret: <shoreline_provided_customer_secret>
    kind: Secret
    metadata:
      name: customer-secret
      namespace: shoreline
    type: Opaque
    ---
    apiVersion: v1
    data:
      .dockerconfigjson: <shoreline_provided_agent_secret>
    kind: Secret
    metadata:
      name: shoreline-agent
      namespace: shoreline
    type: kubernetes.io/dockerconfigjson
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: shoreline-sa
      namespace: shoreline
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: shoreline-agent
    rules:
      - apiGroups:
          - ''
        resources:
          - endpoints
          - services
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - namespaces
          - pods
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - pods/exec
        verbs:
          - get
          - create
      - apiGroups:
          - ''
        resources:
          - nodes
          - nodes/metrics
        verbs:
          - get
          - list
          - patch
          - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: shoreline-sa-view-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: shoreline-agent
    subjects:
      - kind: ServiceAccount
        name: shoreline-sa
        namespace: shoreline
    ---
    apiVersion: v1
    data:
      ca.pem: |
        <shoreline_provided_pem_certificate>
    kind: ConfigMap
    metadata:
      name: ca-pemstore
      namespace: shoreline
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        k8s-app: shoreline
      name: shoreline
      namespace: shoreline
    spec:
      selector:
        matchLabels:
          app: shoreline
      template:
        metadata:
          labels:
            app: shoreline
        spec:
          containers:
            - env:
                - name: ELIXIR_LOGGER_LEVEL
                  value: 'error'
                - name: NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: POD_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.podIP
                - name: BACKEND_ADDRESS
                  value: 'agent-gateway.shoreline-<cluster_name>.io:443'
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                - name: NODE_IP
                  valueFrom:
                    fieldRef:
                      fieldPath: status.hostIP
                - name: SSH_USERNAME
                  value: shoreline
                - name: SSH_PORT
                  value: '22'
                - name: EC2_METADATA_BASE_URL
                  value: 'http://169.254.169.254/latest/meta-data'
                - name: K8S_CACERT_PATH
                  value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
                - name: K8S_TOKEN_PATH
                  value: /var/run/secrets/kubernetes.io/serviceaccount/token
                - name: SECRET
                  valueFrom:
                    secretKeyRef:
                      name: customer-secret
                      key: customer-secret
                - name: SECRETS_DIRECTORY
                  value: '/agent/secrets'
                - name: CUSTOMER_ID
                  value: '<customer_id>'
                - name: GODEBUG
                  value: madvdontneed=1
              image: >-
                docker.pkg.github.com/shorelinesoftware/shoreline-client/shoreline-client:<RELEASE_TAG>
              name: shoreline
              readinessProbe:
                tcpSocket:
                  port: 5789
                initialDelaySeconds: 5
                periodSeconds: 10
              livenessProbe:
                tcpSocket:
                  port: 5789
                initialDelaySeconds: 15
                periodSeconds: 20
              ports:
                - containerPort: 5051
                  name: agent-opservice
              resources:
                limits:
                  cpu: 1000m
                  memory: 1000Mi
              volumeMounts:
                - mountPath: /var/log
                  name: varlog
                - mountPath: /var/lib/docker/containers
                  name: varlibdockercontainers
                  readOnly: true
                - mountPath: /agent/secrets/ca_cert.crt
                  name: ca-pemstore
                  readOnly: false
                  subPath: ca.pem
                - name: host-ssh-volume
                  readOnly: true
                  mountPath: '/agent/.host_ssh'
                - name: scraper-config
                  mountPath: '/agent/etc/scraper.yml'
                  subPath: scraper.yml
          imagePullSecrets:
            - name: shoreline-agent
          serviceAccountName: shoreline-sa
          terminationGracePeriodSeconds: 30
          volumes:
            - hostPath:
                path: /var/log
              name: varlog
            - hostPath:
                path: /var/lib/docker/containers
              name: varlibdockercontainers
            - configMap:
                name: ca-pemstore
              name: ca-pemstore
            - name: host-ssh-volume
              hostPath:
                path: /home/shoreline/.ssh
                type: DirectoryOrCreate
            - name: scraper-config
              configMap:
                name: scraper-config
      updateStrategy:
        rollingUpdate:
          maxUnavailable: 3
        type: RollingUpdate
    
  2. Replace all <XYZ> placeholders with appropriate values:

    • shoreline_provided_customer_secret: Your Shoreline customer secret (Kubernetes stores Secret data base64-encoded)
    • shoreline_provided_agent_secret: The base64-encoded .dockerconfigjson auth secret the Shoreline Agent uses to pull its image
    • shoreline_provided_pem_certificate: A PEM certificate required by Shoreline
    • cluster_name: The Shoreline cluster name you're assigned to
    • customer_id: Your unique customer ID, e.g. acme
    • RELEASE_TAG: The tagged Agent version to install, as provided by Shoreline, e.g. release-0.20.0
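
    Kubernetes stores Secret data base64-encoded, so if a provided value is plain text, encode it before pasting it into the manifest; for example:

    $ echo -n '<shoreline_provided_customer_secret>' | base64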
  3. Apply the Shoreline configuration:

    $ kubectl apply -f shoreline.yaml
    
    namespace/shoreline created
    secret/customer-secret created
    secret/shoreline-agent created
    serviceaccount/shoreline-sa created
    clusterrole.rbac.authorization.k8s.io/shoreline-agent created
    clusterrolebinding.rbac.authorization.k8s.io/shoreline-sa-view-binding created
    configmap/ca-pemstore created
    daemonset.apps/shoreline created
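
    To verify that the Agents came up, check the DaemonSet and, if needed, tail one pod's logs:

    $ kubectl get daemonset,pods -n shoreline
    $ kubectl logs -n shoreline daemonset/shoreline --tail=20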
    

Request an Okta Invite

Once Shoreline is deployed within your Kubernetes cluster, Shoreline operators will provide you with an Okta invite for your initial user. This allows you to authenticate with your Shoreline cluster's endpoint, i.e. https://<customer>.<region>.api.shoreline-<cluster_name>.io.

Please contact your Shoreline representative and inform them you're ready for an Okta invite.