Gladys Assistant v4.25.1: Tasmota and bug fixes!

Hello everyone!

A quick post to introduce Gladys Assistant v4.25.1, released last Friday; it should already be deployed on your instances :sunglasses:

Tasmota improvements around consumption tracking

There were several unit issues with the Tasmota integration's consumption tracking features; these are now fixed in Gladys.

Thanks @Terdious for the PR, and @GBoulvin for the testing :pray:

Fix for a camera-related bug (ffmpeg)

For some users, when their camera didn't respond properly to Gladys's requests, ffmpeg processes remained active and accumulated on the user's system, consuming RAM unnecessarily.

This update limits requests to fetch a camera image to 10 seconds.

Beyond that, the process is terminated to avoid saturating RAM.
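For the curious, here is a minimal sketch of what such a safeguard can look like in Node.js. It is only an illustration, not the actual Gladys code: the helper name, arguments and ffmpeg flags are mine.

const { spawn } = require('child_process');

// Grab a single frame from a camera stream, and kill ffmpeg
// if it has not finished within 10 seconds.
function fetchCameraImage(streamUrl, outputPath, timeoutMs = 10 * 1000) {
  return new Promise((resolve, reject) => {
    const ffmpeg = spawn('ffmpeg', ['-i', streamUrl, '-frames:v', '1', '-y', outputPath]);

    // Safety net: a stuck process is terminated so it cannot pile up and eat RAM.
    const timer = setTimeout(() => {
      ffmpeg.kill('SIGKILL');
      reject(new Error('Camera did not answer within 10 seconds'));
    }, timeoutMs);

    ffmpeg.on('close', (code) => {
      clearTimeout(timer);
      return code === 0 ? resolve(outputPath) : reject(new Error(`ffmpeg exited with code ${code}`));
    });
    ffmpeg.on('error', (err) => {
      clearTimeout(timer);
      reject(err);
    });
  });
}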

Thanks to @lmilcent for reporting the bug :pray:

Fix for a bug in the scene editor interface

The « Execute only when the threshold is passed » button was no longer clickable.

This is now fixed.

Fix for a bug in the new « Devices » widget

The new « Devices » widget was introduced in the latest Gladys release, and there was a bug with roller shutters that could not be controlled.

This is now fixed in this version.

Improvement to m² units

All units with an exponent now display the exponent properly, rather than just a plain « 2 ».

The full changelog is available here.

How to update?

If you installed Gladys with the official Raspberry Pi OS image, your instances will update automatically in the coming hours. This can take up to 24 hours — don’t worry.

If you installed Gladys with Docker, make sure you are using Watchtower (see the documentation).
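If Watchtower is not running yet, it can be started with something like the following. This is one common way to run it; check the documentation for the recommended command and adapt the options to your setup:

docker run -d \
  --name watchtower \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup

The --cleanup flag asks Watchtower to remove the old images after an update.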

Awesome, thanks everyone!!

Thanks for this new summer release :slight_smile:

I’ll be able to test the fix with an image refresh every second :angry_face_with_horns:

Hello,
A small question: since upgrading to 4.25, I've noticed that my Gladys container restarts consistently every 8-9 hours.
Has anyone else noticed the same?
Is it Gladys? Docker?

Not for me apparently.

Thanks for this very useful update :+1:

Not for me either

Thank you for this work

For me too, the container was created 2 days ago but restarts every 8–9 hours.

CONTAINER ID   IMAGE                           COMMAND                  CREATED        STATUS                  PORTS                                                                                            NAMES
9423a0d2c8ae   koenkk/zigbee2mqtt:latest       "docker-entrypoint.s…"   33 hours ago   Up 30 hours                                                                                                              gladys-z2m-zigbee2mqtt
213df88766f2   gladysassistant/gladys:v4       "docker-entrypoint.s…"   2 days ago     Up 2 hours                                                                                                               gladys
382d1aca2515   eclipse-mosquitto:2             "/docker-entrypoint.…"   2 weeks ago    Up 30 hours                                                                                                              gladys-z2m-mqtt
7b0f1fffd08d   eclipse-mosquitto:2             "/docker-entrypoint.…"   2 weeks ago    Up 30 hours                                                                                                              eclipse-mosquitto
3022ae50d1f0   nodered/node-red:latest         "./entrypoint.sh"        4 weeks ago    Up 30 hours (healthy)                                                                                                    node_red
a82e3156b2b2   portainer/portainer-ce:latest   "/portainer"             5 weeks ago    Up 30 hours             0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9443->9443/tcp, :::9443->9443/tcp, 9000/tcp   portainer
6bc764d42070   containrrr/watchtower           "/watchtower --clean…"   3 months ago   Up 30 hours             8080/tcp                                                                                         watchtower

Hi, remember to attach your logs, otherwise it's hard to guess what's happening on your machine :slightly_smiling_face:
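For a Docker install, the recent Gladys logs can usually be retrieved with:

docker logs --since 24h gladys

(adjust the container name to your setup, and add -f to follow the logs live).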

Hello,
I initially thought it was a Docker problem.
But good catch on your part: I checked the logs and I think I found the anomaly!
Apparently it's the Nextcloud service.
I stopped the service (which I wasn't using anyway) and since then, no more restarts every 8 hours.
I have the impression it was causing excessive RAM usage: on the Gladys server, Firefox and MQTT Explorer would crash after a while and wouldn't restart, and since I stopped the Nextcloud service in Gladys there have been no more problems.
Log excerpt attached, if that helps. I haven't started developing on Gladys yet (I have set up an environment and intend to, but for now I'm working in Delphi/Pascal, SQL and Python).

2023-07-03T23:59:02+0200 <info> bot.poll.js:36 (MessageHandler.poll) Fail to request new Nextcloud Talk messages, retry
2023-07-03T23:59:02+0200 <warn> bot.poll.js:37 (MessageHandler.poll) TypeError: Invalid URL
    at new NodeError (node:internal/errors:399:5)
    at new URL (node:internal/url:560:13)
    at dispatchHttpRequest (/src/server/services/nextcloud-talk/node_modules/axios/lib/adapters/http.js:176:20)
    at new Promise (<anonymous>)
    at http (/src/server/services/nextcloud-talk/node_modules/axios/lib/adapters/http.js:112:10)
    at Axios.dispatchRequest (/src/server/services/nextcloud-talk/node_modules/axios/lib/core/dispatchRequest.js:51:10)
    at Axios.request (/src/server/services/nextcloud-talk/node_modules/axios/lib/core/Axios.js:142:33)
    at Function.wrap [as request] (/src/server/services/nextcloud-talk/node_modules/axios/lib/helpers/bind.js:5:15)
    at MessageHandler.poll (/src/server/services/nextcloud-talk/lib/bot/bot.poll.js:34:31)
    at Timeout._onTimeout (/src/server/services/nextcloud-talk/lib/bot/bot.poll.js:52:27)
    at listOnTimeout (node:internal/timers:569:17)
    at processTimers (node:internal/timers:512:7) {
  input: '/ocs/v2.php/apps/spreed/api/v1/chat/?timeout=15\u0026lookIntoFuture=0\u0026includeLastKnown=1',
  code: 'ERR_INVALID_URL'
}
2023-07-03T23:59:03+0200 <info> bot.poll.js:36 (MessageHandler.poll) Fail to request new Nextcloud Talk messages, retry
2023-07-03T23:59:03+0200 <warn> bot.poll.js:37 (MessageHandler.poll) TypeError: Invalid URL
    at new NodeError (node:internal/errors:399:5)
    at new URL (node:internal/url:560:13)
    at dispatchHttpRequest (/src/server/services/nextcloud-talk/node_modules/axios/lib/adapters/http.js:176:20)
    at new Promise (<anonymous>)
    at http (/src/server/services/nextcloud-talk/node_modules/axios/lib/adapters/http.js:112:10)
    at Axios.dispatchRequest (/src/server/services/nextcloud-talk/node_modules/axios/lib/core/dispatchRequest.js:51:10)
    at Axios.request (/src/server/services/nextcloud-talk/node_modules/axios/lib/core/Axios.js:142:33)
    at Function.wrap [as request] (/src/server/services/nextcloud-talk/node_modules/axios/lib/helpers/bind.js:5:15)
    at MessageHandler.poll (/src/server/services/nextcloud-talk/lib/bot/bot.poll.js:34:31)
    at Timeout._onTimeout (/src/server/services/nextcloud-talk/lib/bot/bot.poll.js:52:27)
    at listOnTimeout (node:internal/timers:569:17)
    at processTimers (node:internal/timers:512:7) {
  input: '/ocs/v2.php/apps/spreed/api/v1/chat/?timeout=15\u0026lookIntoFuture=0\u0026includeLastKnown=1',
  code: 'ERR_INVALID_URL'
}
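For what it's worth, the ERR_INVALID_URL in this trace comes from Node's URL parser: the input shown is only a path, and new URL() rejects a path with no base, which presumably means the Nextcloud server URL was never configured for the service. A minimal illustration (not actual Gladys code):

// A path on its own is not a valid absolute URL for Node's URL parser:
new URL('/ocs/v2.php/apps/spreed/api/v1/chat/'); // throws TypeError [ERR_INVALID_URL]

// It only works once a base is provided, e.g. the configured Nextcloud instance:
new URL('/ocs/v2.php/apps/spreed/api/v1/chat/', 'https://my-nextcloud.example.com'); // OK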

After @Einstein8854's last message, I went to Settings > Services to turn off nextcloud-talk (which I don't use), and since I hadn't been there in a long time, the list has gotten quite long!

Some items raise questions:

  • example: ???
  • Z-Wave: in error

I know that Z-Wave is obsolete but is it normal that it’s showing an error? Does this affect Gladys’ performance?

Can I disable example? Does it serve any other purpose?

By the way, I noticed something odd: the service names are not displayed the same way on mobile as on the computer (both via Gladys+).


Honestly, it doesn’t change anything at all :smiley:

Example is a sample service, mainly intended to show developers how a service works. It's not worth paying attention to; there's nothing running in it :slight_smile:

Don’t worry about Z-Wave, there isn’t a single line of Z-Wave code left in Gladys :slight_smile: It must be a leftover. It has no impact on Gladys.

Strange, it must be a translation loading issue. Can you create a GitHub issue? I can't guarantee I'll look at it in the short term, it's really very minor ^^

It seemed to me that Gladys shouldn't go down when a service fails; do you have any idea why a service such as Nextcloud (which isn't even configured here) would cause Gladys to crash?

When I went to disable the service, I noticed that the Bluetooth service on my system was in an error state as well:

I can’t find any specific logs.

Can you create a separate topic with all the information? (crash logs in particular)

Your message was intended for @Einstein8854, I suppose? :smiley:

Because I don't have any info (logs, dates or anything) about the Bluetooth issue :-/

No, it was for you

Well, that's what happened to @Einstein8854; in my case I simply disabled the service as a precaution ^^'.

But my Bluetooth service is showing an error, yet Gladys didn't crash.

Thanks @Einstein8854 for reporting the Nextcloud Talk issue. For those of you who have it, could one of you access the database and check the value of the NEXTCLOUD_TALK_TOKEN variables? Normally, if the poll is running, it's because you've already tried to configure the service and a value is stored there.
I'll try to fix the problem quickly.
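For anyone who wants to check, a query along these lines should work from the host; the database path and the t_variable table name are from memory, so adjust them to your installation:

sqlite3 /var/lib/gladysassistant/gladys-production.db \
  "SELECT name, value FROM t_variable WHERE name LIKE 'NEXTCLOUD%';"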

@bertrandda Normally, a crash when starting a service shouldn't bring Gladys down; do you see what might have slipped through? :slight_smile: