Hi
I’m creating this post because I strongly suspect, without being able to prove it, that something in my setup is slowing down Gladys’ processing. I’d therefore like to know whether this is normal and whether there is a way to optimize it.
My configuration is as follows:
about 100 deviceTypes, almost all binary (RFLink type for the ones that change frequently)
about 30 scenarios, 22 of which use devicetype-new-value as a trigger.
20 modules
30 scripts
Full Raspbian image on an RPi 3B+, manual installation of Gladys without any issues
(VNC + FTP access running in the background, but they consume almost nothing).
I mainly use lighting scenarios that manage the 10 light points in my apartment. ALL of them are updated on each scenario activation (with a 300 ms delay between each command because it’s RFLink, plus redundancy, so 20 updates in total). Each update fires the devicetype-new-value event, so a single activation means 20 × 22 = 440 scenario-condition checks and as many lines of logs.
When I launch a lighting scenario, there is sometimes a delay of several seconds between the command and its execution, as if Gladys were overloaded.
Is my configuration too demanding for the Raspberry Pi? Is there an easy optimization?
I also notice a lot of disk access on the RPi’s SD card (judging by the activity LED) when launching commands. Could writing the log lines or reading the database be the cause of this slowdown?
EDIT: I just checked — I have more than 230 MB of Gladys logs.
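If the log files turn out to be the culprit, they can be emptied in place, which frees the space without invalidating the file handle Gladys may still hold open. A minimal sketch — the path below is a throwaway demo file, not the real Gladys log location, which varies by install:

```shell
# Demo on a throwaway file; replace /tmp/gladys-demo.log with your
# actual Gladys log file (path is hypothetical, adjust to your install).
log=/tmp/gladys-demo.log
yes 'old log line' | head -n 1000 > "$log"   # simulate a bloated log
wc -c < "$log"                               # size before truncation
truncate -s 0 "$log"                         # empty the file in place
wc -c < "$log"                               # prints 0
```

Setting up logrotate on the real log file would keep this from recurring.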
Indeed, it’s the same issue!
The biggest freezes occur during a `checkUserPresence` or other similar checks.
CPU usage doesn’t climb as much as reported, but my RPi is very well cooled (42 °C max, so it can run faster than one throttling at 80 °C), and I still see MySQL peaking at 200% CPU.
Regarding the indexes, have you set them up yourself? I understand the general idea, but I don’t know how to implement it (maybe pierre-gilles has done it since 2017?).
I did some cleanup (160,000 fewer rows…) and ran pti_nico’s response-time tests.
Where he measured 0.10 seconds and found it slow, I get almost 1 second!
mysql> SELECT user.id, MAX(event.datetime) AS datetime
    FROM user
    JOIN event ON event.user = user.id
    JOIN eventtype ON event.eventtype = eventtype.id
    WHERE ( eventtype.code = 'back-at-home' OR eventtype.code = 'user-seen-at-home' )
    AND event.house = 1
    AND user.id IN (
        SELECT user.id
        FROM user
        WHERE (
            SELECT eventtype.code
            FROM event
            JOIN eventtype ON event.eventtype = eventtype.id
            WHERE ( eventtype.code = 'back-at-home' OR eventtype.code = 'left-home' )
            AND event.user = user.id
            AND event.house = 1
            ORDER BY datetime DESC
            LIMIT 1
        ) = 'back-at-home'
    )
    GROUP BY user.id
    HAVING datetime < DATE_SUB(NOW(), INTERVAL 10 MINUTE);
Empty set (0.88 sec)
With indexes on the datetime and eventtype columns, the time drops to 0.40 sec, then 0.03 sec!
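For reference, the indexes in question can be created with statements along these lines — the table and column names are taken from the query above, but the exact schema may differ on your install, so verify with `SHOW CREATE TABLE` first:

```sql
-- Index for the datetime sort/filter on event
CREATE INDEX idx_event_datetime ON event (datetime);

-- Index for the eventtype.code lookups
CREATE INDEX idx_eventtype_code ON eventtype (code);

-- Optional composite index covering the per-user latest-event subquery
-- (user, house, datetime match its WHERE and ORDER BY clauses)
CREATE INDEX idx_event_user_house ON event (user, house, datetime);
```

Running `EXPLAIN` on the query before and after shows whether MySQL actually picks the new indexes.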
Do indexes need to be rebuilt regularly, or is that automatic?