Run Gladys on Kubernetes

Hi!

I have been struggling to install Gladys on Kubernetes, without success so far…
I am not very familiar with Node.js, so pardon my ignorance if I am missing something obvious :slight_smile:

I am deploying Gladys using the official Docker image and a simple deployment resource.
To access it, I plan to use an Ingress (and an ingress controller), but for now Gladys does not even start…

The configuration is the default one; I used pretty much the same options as in the documentation (except for running privileged and mounting the Docker socket, which I skipped for obvious security reasons).

When starting, I get the following logs:

> [email protected] start:prod /src/server
> npm run db-migrate:prod && cross-env NODE_ENV=production node index.js


> [email protected] db-migrate:prod /src/server
> cross-env NODE_ENV=production node_modules/.bin/sequelize db:migrate


Sequelize CLI [Node: 12.13.0, CLI: 5.5.1, ORM: 4.44.3]

Loaded configuration file "config/config.js".
Using environment "production".
No migrations were executed, database schema was already up to date.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] start:prod: `npm run db-migrate:prod && cross-env NODE_ENV=production node index.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] start:prod script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2019-11-18T01_03_42_470Z-debug.log

The database is already up to date because it was created by a previous failed run (the pod keeps restarting over and over with the same data). Since the migration step exits successfully, it must be node index.js itself that exits with status 1.

I used an emptyDir volume to be able to access the log file (under /root/.npm/_logs) after the pod crashes and restarts. Here is its content:

/src/server # cat /root/.npm/_logs/2019-11-18T01_03_42_470Z-debug.log
0 info it worked if it ends with ok
1 verbose cli [ '/usr/local/bin/node', '/usr/local/bin/npm', 'run', 'start:prod' ]
2 info using [email protected]
3 info using [email protected]
4 verbose run-script [ 'prestart:prod', 'start:prod', 'poststart:prod' ]
5 info lifecycle [email protected]~prestart:prod: [email protected]
6 info lifecycle [email protected]~start:prod: [email protected]
7 verbose lifecycle [email protected]~start:prod: unsafe-perm in lifecycle true
8 verbose lifecycle [email protected]~start:prod: PATH: /usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/node-gyp-bin:/src/server/node_modules/.bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
9 verbose lifecycle [email protected]~start:prod: CWD: /src/server
10 silly lifecycle [email protected]~start:prod: Args: [
10 silly lifecycle   '-c',
10 silly lifecycle   'npm run db-migrate:prod && cross-env NODE_ENV=production node index.js'
10 silly lifecycle ]
11 silly lifecycle [email protected]~start:prod: Returned: code: 1  signal: null
12 info lifecycle [email protected]~start:prod: Failed to exec start:prod script
13 verbose stack Error: [email protected] start:prod: `npm run db-migrate:prod && cross-env NODE_ENV=production node index.js`
13 verbose stack Exit status 1
13 verbose stack     at EventEmitter.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/index.js:332:16)
13 verbose stack     at EventEmitter.emit (events.js:210:5)
13 verbose stack     at ChildProcess.<anonymous> (/usr/local/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:55:14)
13 verbose stack     at ChildProcess.emit (events.js:210:5)
13 verbose stack     at maybeClose (internal/child_process.js:1021:16)
13 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:283:5)
14 verbose pkgid [email protected]
15 verbose cwd /src/server
16 verbose Linux 4.15.0-70-generic
17 verbose argv "/usr/local/bin/node" "/usr/local/bin/npm" "run" "start:prod"
18 verbose node v12.13.0
19 verbose npm  v6.12.0
20 error code ELIFECYCLE
21 error errno 1
22 error [email protected] start:prod: `npm run db-migrate:prod && cross-env NODE_ENV=production node index.js`
22 error Exit status 1
23 error Failed at the [email protected] start:prod script.
23 error This is probably not a problem with npm. There is likely additional logging output above.
24 verbose exit [ 1, true ]
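As a side note, instead of the emptyDir trick, I believe the logs of the crashed container can also be fetched directly with kubectl (the pod name below is just an example; the real one comes from kubectl get pods):

```shell
# Show the logs of the previously terminated container of the crashing pod
# (replace the pod name with your own, e.g. taken from `kubectl get pods`)
kubectl logs --previous g-gladys-5b7d9c6f4-x2k8q
```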

I don’t think it’s relevant, but just in case, I am using:

  • Kubernetes version 1.16.1
  • Docker version 18.06.2-ce

Kubernetes manifests used

---
# Source: gladys/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: g-gladys
  labels:
    app.kubernetes.io/name: gladys
    helm.sh/chart: gladys-0.1.0
    app.kubernetes.io/instance: g
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
---
# Source: gladys/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: g-gladys
  labels:
    app.kubernetes.io/name: gladys
    helm.sh/chart: gladys-0.1.0
    app.kubernetes.io/instance: g
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: gladys
    app.kubernetes.io/instance: g
---
# Source: gladys/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: g-gladys
  labels:
    app.kubernetes.io/name: gladys
    helm.sh/chart: gladys-0.1.0
    app.kubernetes.io/instance: g
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: gladys
      app.kubernetes.io/instance: g
  template:
    metadata:
      labels:
        app.kubernetes.io/name: gladys
        app.kubernetes.io/instance: g
    spec:
      serviceAccountName: g-gladys
      securityContext: {}
      containers:
        - name: gladys
          securityContext: {}
          image: "gladysassistant/gladys:4.0.0-beta-amd64"
          imagePullPolicy: IfNotPresent
          env:
            - name: NODE_ENV
              value: production
            - name: SERVER_PORT
              value: "80"
            - name: TZ
              value: Europe/Paris
            - name: SQLITE_FILE_PATH
              value: /var/lib/gladysassistant/gladys-production.db
          volumeMounts:
            - name: db
              mountPath: /var/lib/gladysassistant
            - name: logs
              mountPath: /root/.npm/_logs/
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 60
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: http
            initialDelaySeconds: 45
            periodSeconds: 5
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi

      volumes:
        - name: db
          emptyDir: {}
        - name: logs
          emptyDir: {}
---
# Source: gladys/templates/ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: g-gladys
  labels:
    app.kubernetes.io/name: gladys
    helm.sh/chart: gladys-0.1.0
    app.kubernetes.io/instance: g
    app.kubernetes.io/version: "1.0"
    app.kubernetes.io/managed-by: Tiller
spec:
  rules:
    - host: "gladys.home"
      http:
        paths:
          - path: /
            backend:
              serviceName: g-gladys
              servicePort: 80

Hi!

Thanks for your message here :slight_smile:

Looking at your configuration files, I don’t see anything wrong…

As you can see in the docker run command, we first run the migration and then start Gladys. Since the error is not very verbose here, maybe for testing purposes you could try starting the container with just the Gladys start command, cross-env NODE_ENV=production node index.js (as the migration has already run)? Maybe the error will be more verbose, I don’t know.
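In the Deployment above, that could look something like this (just a sketch, I haven’t tested it on k8s myself; only the fields shown change, keep the rest of the container spec as it is — and since NODE_ENV is already set to production through the env section, cross-env is not strictly needed here):

```yaml
# In the container spec of the Deployment:
containers:
  - name: gladys
    image: "gladysassistant/gladys:4.0.0-beta-amd64"
    workingDir: /src/server
    # Bypass the npm wrapper and the migration step,
    # start Gladys directly to get the raw error output
    command: ["node", "index.js"]
```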

It’s a little hard to debug here without more error logs :confused:

Hi,

I also have this error when trying to run gladys on k8s.
From the ReplicaSet logs I get more information:

[email protected] start:prod /src/server
npm run db-migrate:prod && cross-env NODE_ENV=production node index.js
[email protected] db-migrate:prod /src/server
cross-env NODE_ENV=production node_modules/.bin/sequelize db:migrate
Sequelize CLI [Node: 12.13.0, CLI: 5.5.1, ORM: 4.44.3]
Loaded configuration file "config/config.js".
Using environment "production".
No migrations were executed, database schema was already up to date.
internal/modules/cjs/loader.js:797
throw err;
^
Error: Cannot find module 'bottleneck/es5'
Require stack:

  • /src/server/services/philips-hue/lib/light/index.js
  • /src/server/services/philips-hue/index.js
  • /src/server/services/index.js
  • /src/server/lib/index.js
  • /src/server/index.js
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:794:15)
    at Function.Module._load (internal/modules/cjs/loader.js:687:27)
    at Module.require (internal/modules/cjs/loader.js:849:19)
    at require (internal/modules/cjs/helpers.js:74:18)
    at Object.<anonymous> (/src/server/services/philips-hue/lib/light/index.js:1:20)
    at Module._compile (internal/modules/cjs/loader.js:956:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:973:10)
    at Module.load (internal/modules/cjs/loader.js:812:32)
    at Function.Module._load (internal/modules/cjs/loader.js:724:14)
    at Module.require (internal/modules/cjs/loader.js:849:19)
    at require (internal/modules/cjs/helpers.js:74:18)
    at Object.<anonymous> (/src/server/services/philips-hue/index.js:2:32)
    at Module._compile (internal/modules/cjs/loader.js:956:30)

I haven’t taken the time to rebuild the image locally with that library added yet.
Hope it helps!
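If someone wants to try before I do, I imagine the rebuild could be as simple as this (untested; it assumes npm install works inside /src/server in the image):

```dockerfile
FROM gladysassistant/gladys:4.0.0-beta-amd64
WORKDIR /src/server
# Add the module the Philips Hue service fails to require
RUN npm install bottleneck
```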

Interesting, thanks for your feedback! I’m surprised you get this kind of error linked to the code; since it’s a simple Docker image, if it works on our side it should work the same on your side in k8s.

I know, that’s the point of using Docker.
I may have some time later this week to look into it.
