Instance Gladys KO

Hello,

My Gladys instance was down this morning.
The Gladys container is running but does not respond on port 80.
All my other containers on my mini-PC are working (node-red, z2m…)
I see errors appearing quite frequently in the Gladys logs (sudo docker logs Gladys):

2024-04-09T04:50:48+0200 index.js:16 (process.) TypeError: Invalid value "undefined" for header "authorization"
at ClientRequest.setHeader (node:_http_outgoing:651:3)

code: 'ERR_HTTP_INVALID_HEADER_VALUE'

I have the impression this started after an update triggered by Watchtower.
sudo docker ps:
[screenshot of the sudo docker ps output]

"23 hours ago" indeed corresponds to the moment when I stopped receiving Gladys Telegram notifications.

What have I done recently on Gladys?
→ I configured a scene with a "Wait" action of 24h (I doubt it comes from that)

Of course, I tried restarting the Gladys container as well as rebooting the entire machine, but the problem persists. Gladys is not accessible from other devices either.
I can share more logs if needed.

Do you have any other debugging leads? Am I the only one experiencing this behavior since an update?

Thanks

Hi @qleg!

Do you have the full log?

Monday’s upgrade was really minor, on the backend almost nothing changed… So I’d be surprised if it was the upgrade itself.

A few routine checks:

  • Is there enough disk space left on your mini-PC? df -h to check from the CLI :slight_smile:
  • Could the mini-PC be under heavy load in some way? htop is handy to check that

Hello,

Yes, I checked the disk space: no problem, there’s still plenty of room.
And htop is fine too; my machine isn’t maxing out its resources.

I’ll send the full log as soon as possible.

Thanks

Here is the full log:

2024-04-09T04:50:48+0200 index.js:15 (process.) unhandledRejection catched: Promise {
TypeError: Invalid value "undefined" for header "authorization"
at ClientRequest.setHeader (node:_http_outgoing:651:3)
at new ClientRequest (node:_http_client:291:14)
at Object.request (node:https:366:10)
at RedirectableRequest._performRequest (/src/server/node_modules/follow-redirects/index.js:284:24)
at new RedirectableRequest (/src/server/node_modules/follow-redirects/index.js:66:8)
at Object.request (/src/server/node_modules/follow-redirects/index.js:523:14)
at dispatchHttpRequest (/src/server/node_modules/@gladysassistant/gladys-gateway-js/node_modules/axios/lib/adapters/http.js:202:25)
at new Promise ()
at httpAdapter (/src/server/node_modules/@gladysassistant/gladys-gateway-js/node_modules/axios/lib/adapters/http.js:46:10)
at dispatchRequest (/src/server/node_modules/@gladysassistant/gladys-gateway-js/node_modules/axios/lib/core/dispatchRequest.js:53:10)
at Axios.request (/src/server/node_modules/@gladysassistant/gladys-gateway-js/node_modules/axios/lib/core/Axios.js:108:15)
at Axios. [as get] (/src/server/node_modules/@gladysassistant/gladys-gateway-js/node_modules/axios/lib/core/Axios.js:129:17)
at Function.wrap [as get] (/src/server/node_modules/@gladysassistant/gladys-gateway-js/node_modules/axios/lib/helpers/bind.js:9:15)
at Object.get (/src/server/node_modules/@gladysassistant/gladys-gateway-js/lib/request.js:86:6)
at GladysGatewayJs.enedisGetDailyConsumption (/src/server/node_modules/@gladysassistant/gladys-gateway-js/index.js:1177:23)
at Gateway.enedisGetDailyConsumption (/src/server/lib/gateway/enedis/gateway.enedisGetDailyConsumption.js:17:56)
at recursiveBatchCall (/src/server/services/enedis/lib/enedis.sync.js:24:37)
at /src/server/services/enedis/lib/enedis.sync.js:92:32
at tryCatcher (/src/server/services/enedis/node_modules/bluebird/js/release/util.js:16:23)
at Object.gotValue (/src/server/services/enedis/node_modules/bluebird/js/release/reduce.js:166:18)
at Object.gotAccum (/src/server/services/enedis/node_modules/bluebird/js/release/reduce.js:155:25)
at Object.tryCatcher (/src/server/services/enedis/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/src/server/services/enedis/node_modules/bluebird/js/release/promise.js:547:31)
at Promise._settlePromise (/src/server/services/enedis/node_modules/bluebird/js/release/promise.js:604:18)
at Promise._settlePromiseCtx (/src/server/services/enedis/node_modules/bluebird/js/release/promise.js:641:10)
at _drainQueueStep (/src/server/services/enedis/node_modules/bluebird/js/release/async.js:97:12)
at _drainQueue (/src/server/services/enedis/node_modules/bluebird/js/release/async.js:86:9)
at Async._drainQueues (/src/server/services/enedis/node_modules/bluebird/js/release/async.js:102:5)
at Immediate.Async.drainQueues (/src/server/services/enedis/node_modules/bluebird/js/release/async.js:15:14)
at processImmediate (node:internal/timers:476:21) {
code: 'ERR_HTTP_INVALID_HEADER_VALUE'
}

I don’t know if it’s related to my crash, but I’m trying to gather some information.

When I restart the Gladys container, these logs catch my attention:

2024-04-10T20:40:42+0200 index.js:15 (process.) unhandledRejection catched: Promise {
TypeError: Cannot read properties of undefined (reading 'substr')
at /src/server/lib/scene/scene.addScene.js:45:47
at Array.forEach ()
at SceneManager.addScene (/src/server/lib/scene/scene.addScene.js:36:20)
at /src/server/lib/scene/scene.init.js:22:10
at Array.map ()
at SceneManager.init (/src/server/lib/scene/scene.init.js:20:30)
at Object.start (/src/server/lib/index.js:150:9)
at /src/server/index.js:51:3
}
2024-04-10T20:40:42+0200 index.js:16 (process.) TypeError: Cannot read properties of undefined (reading 'substr')
at /src/server/lib/scene/scene.addScene.js:45:47
at Array.forEach ()
at SceneManager.addScene (/src/server/lib/scene/scene.addScene.js:36:20)
at /src/server/lib/scene/scene.init.js:22:10
at Array.map ()
at SceneManager.init (/src/server/lib/scene/scene.init.js:20:30)
at Object.start (/src/server/lib/index.js:150:9)
at /src/server/index.js:51:3

I doubt this is related to a scene I recently created with a 24h "Wait" action; we’d have more logs than this, wouldn’t we?

Ah indeed the second log is much more interesting! That’s what’s causing the problem!

Here’s the line that’s crashing:

You must have created a scene that triggers every month but didn’t set the time, and that’s causing Gladys to crash.
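
For illustration (a simplified sketch, not the actual Gladys source; the trigger.time field name and the node-schedule usage are assumptions on my part), the pattern is roughly:

// Hypothetical sketch of how a monthly scheduled trigger gets registered.
// If the user never picked a time, trigger.time is undefined and the .substr
// call throws, which surfaces as the unhandledRejection above and brings the
// instance down during startup.
const schedule = require('node-schedule');

function scheduleMonthlyTrigger(trigger, runScene) {
  const rule = new schedule.RecurrenceRule();
  rule.date = trigger.day_of_the_month; // e.g. 1 = the 1st of each month
  rule.hour = parseInt(trigger.time.substr(0, 2), 10); // crashes when time is undefined
  rule.minute = parseInt(trigger.time.substr(3, 2), 10);
  return schedule.scheduleJob(rule, runScene);
}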

Until the bug is fixed and pushed to Gladys, can you connect to the DB, find the scene in question, and delete it?

ok thanks for the info, I’ll see what I can do

1 Like

Indeed, it was caused by a scene scheduled monthly without a time (my oversight).
I deleted the scene from the database and my instance is working correctly, thanks :slight_smile:

Some info on the bug for a fix:

  • We do get an error message when saving a scene that has a monthly trigger without a time: "An error occurred while saving your scene. Please check that all actions / triggers are filled in and correct." But the scene is still saved, and you can leave the scene edit page without any problem.

  • The instance crash occurs when the scene runs in scheduled mode

  • We could set a default time when none is provided (for example 00:00); see the sketch after this list

  • Or block leaving the scene edit page while there’s an error, but that seems a bit more complex to implement
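
For the default-time idea, a minimal sketch (assuming a trigger.time field as above; not the actual Gladys code):

// Minimal sketch of the "default time" suggestion (assumed field names, illustration only).
const DEFAULT_SCHEDULED_TIME = '00:00';

function normalizeScheduledTrigger(trigger) {
  // Fall back to midnight when a monthly trigger was saved without a time.
  return {
    ...trigger,
    time: trigger.time || DEFAULT_SCHEDULED_TIME,
  };
}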

3 Likes

Possible solution: be able to save scenes that are in error, but prevent them from being activated.
=> Error message when saving and on each attempt to activate the scene

2 Likes

Thanks for the investigation, can you create a GitHub issue to keep track of the bug? :slight_smile:

1 Like

Yes, I’ll create a GitHub issue: Crash instance at reboot when a scene is configured every-month without hours · Issue #2052 · GladysAssistant/Gladys (github.com)

Fix:

1 Like

Hi @qleg, I worked on this since it’s really ultra-critical!

I opened a PR:

Specifically:

  • Default values will be selected for the "scheduled trigger"
  • A failing scene will no longer crash Gladys on startup
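
In spirit, the second point amounts to something like this (a rough sketch, not the PR’s actual code; the field and logger names are assumptions):

// Rough sketch: a broken scene is logged and skipped instead of crashing startup.
function initScenes(sceneManager, scenes, logger) {
  scenes.forEach((scene) => {
    try {
      sceneManager.addScene(scene);
    } catch (e) {
      logger.error(`Unable to schedule scene ${scene.selector}`, e);
    }
  });
}
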
1 Like