Performance issue on dashboard with many charts

Hi @lmilcent!

Apart from going directly into the DB, I don’t have any particular tips :slight_smile: Is it really taking that long on your Pi?

You can use a program like TablePlus on your PC to get a clean access interface to your DB.

Make sure to stop Gladys before retrieving/restoring your new DB

Indeed, I realized it would corrupt the database otherwise.

Yes, I can confirm: it’s been running for more than 12 hours to delete a single feature. But I know it’s related to two things: an 8 GB database and an RPi with an SD card rather than an SSD.

If you have that on hand, can you remind me how to:

  1. List features by device
  2. Delete a feature’s data (and I’ll then disable its storage in the interface)

Otherwise I’ll go look in the source code :sweat_smile::sweat_smile:

Thanks :slight_smile:

Here it is:
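
Something like this should list the features per device (I’m writing it from memory, so double-check the table and column names against your own DB):

SELECT d.name AS device, f.id AS feature_id, f.name AS feature
FROM t_device_feature f
JOIN t_device d ON d.id = f.device_id
ORDER BY d.name, f.name;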

Next, to delete the states of a feature:

-- Raw states of the feature
DELETE FROM t_device_feature_state WHERE device_feature_id = 'cca32009-8d8f-4965-b658-bdb28eb31300';
-- Aggregated states of the same feature
DELETE FROM t_device_feature_state_aggregate WHERE device_feature_id = 'cca32009-8d8f-4965-b658-bdb28eb31300';

Don’t forget at the end:

VACUUM;
2 Likes

Thanks, I knew you had already done the job for me at least once 😂
Sorry for the duplicate request, and thanks again!!

1 Like

Results, as shown in the image: a size reduced by 50% :tada:

Now my DB is 3.8 GB instead of over 8 GB!

On the backup side (personal), it’s noticeable too.

4 Likes

That’s great, I also reduced my database by 1.6 GB and it feels so good to have dashboards that load immediately! :smiley:

3 Likes

It’s a pleasure to read all that :smiley:

2 Likes

We could also clean the "location" table; we have the entire location history, but it’s useless, right?

In Gladys 3 there was a feature that let you see the history of those movements; I found that handy :slight_smile:

Is this table really big on your side?

I had cleaned it a few weeks ago, and I did it again this morning (approx. 60,000 rows).

I don’t see a use case for the location history.

Ok, 60k rows is nothing :slight_smile: We could possibly add a task to clear that history (a bit like we do with the states: keep the last 6 months, the last year, etc.)

Great, the database shrank on my side too, and it’s noticeable when displaying the graphs. :grinning_face:

However, now I have a problem: every time I modify a device (its name, for example), it locks Gladys up for several minutes (easily 10 to 15 min). :sad_but_relieved_face:

Is this normal?

I can provide more details and run tests if needed, but only in the evening.

So I tried to add a feature to an MQTT device.

I had to wait about 22 minutes after saving for the page to appear.

I confirm I have the same bug.
Each time I modify one of my MQTT devices, it takes a very long time before it reports that an error has occurred.

I have to restart the Gladys instance for it to work again, but the change is applied.

I encountered the same problem when I removed the device history.

I didn’t have time to report the issue.

Same here.

@pierre-gilles I don’t know if that’s the case, but it really looks like either the database or Gladys goes into a blocking mode at that point, because nothing responds at all anymore. So either I’m totally wrong (it wouldn’t be the first time :sweat_smile:), or there’s something we could do to optimize that part a bit.

On my side, I see a node index.js process at 100% CPU, while writes to my Raspberry Pi’s SD card don’t seem to be at 100% I/O.

Hi everyone!

In fact, when you register a device, Gladys does 2 things:

  • If some features have history disabled, Gladys will purge all states of those features. This can potentially represent millions of rows on some DBs, which can block the DB for quite some time during the deletion (you can gauge what that represents on your own DB with the count query just after this list).
  • After that, Gladys runs a VACUUM, an operation that will "actually" remove the data from disk and free space. VACUUM is a blocking operation, and for the entire duration of the VACUUM, the DB, and therefore Gladys, is not accessible. On small DBs this is invisible; on large DBs it can take 20 minutes and it can indeed look like it’s "broken", when in fact it’s just working!
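
To get an idea of what the first point represents on your own DB, you can count the states of a feature before disabling its history (reuse your own feature id; the same idea applies to t_device_feature_state_aggregate):

SELECT COUNT(*) FROM t_device_feature_state WHERE device_feature_id = 'cca32009-8d8f-4965-b658-bdb28eb31300';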

The solution I see:

  • Implement a background state-deletion task that deletes in small batches so Gladys isn’t blocked when deleting features with millions of states. We need to run tests and find the right batch size so it’s neither too blocking nor too slow (a SQL sketch of the idea follows this list).
  • Remove the immediate VACUUM and set up a scheduled nightly VACUUM task that can be enabled/disabled (weekly, for example).
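
Roughly, the batched purge could look like this (assuming the states table’s primary key is the id column; the 10,000 batch size is just a placeholder to tune), repeated from the Node.js side until no rows are deleted anymore:

-- Delete one small batch of states for a given feature, then repeat
DELETE FROM t_device_feature_state
WHERE id IN (
  SELECT id FROM t_device_feature_state
  WHERE device_feature_id = 'cca32009-8d8f-4965-b658-bdb28eb31300'
  LIMIT 10000
);

Between batches the write lock is released, so other Gladys queries get a chance to run; the trade-off is that the full purge takes longer overall.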

The downside of all this is that the "storage freeing" effect won’t be immediate; after a states purge you’ll have to wait until the next night, or up to a week, for Gladys to actually reclaim the space. Optionally, we can add a button somewhere to purge manually, but the user must be aware that Gladys will be unavailable for the entire duration of the purge.

That said, I have a pretty busy week so I don’t think I’ll be able to look into this in the coming days; I’ll check either at the end of the week or during next week! I’ll keep you posted :slight_smile:

3 Likes

No worries!

I don’t think it’s urgent, in the sense that nothing is broken and it only happens when you remove history :slight_smile:

Thanks for the analysis!

But 20 minutes for a small change like a rename is very long. It’s very blocking, even rather annoying actually, even though we know that Gladys isn’t broken and is just working in the background.

For my part, I think the cleanup can wait until later, as you suggest. And letting the user enable or disable it would be good.

Thanks for the details — I better understand why we all feel it’s a blocker.
I’m in favor of running VACUUM at night (for example, between 2:00 and 4:00 AM).
Which user is actually going to worry about their database size being a bit off for a few days anyway?

1 Like