With a little more distance you can almost see the license plate; it should be able to read it when I come home.
The problem is the timing.
Because we’re talking about a static capture of the video stream…
So when I arrive home, I open the gate > trigger the scene.
But I don’t know exactly when Gladys will hand the image to ChatGPT for analysis…
A little too early and we won’t even see the car / a little too late and the license plate will be too low in the frame.
Nothing prevents you from taking a series of 3 or 4 images at 0.5 s intervals (or more, depending on what seems appropriate to you), then asking it whether it sees the license plate and, if it does, to read it.
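Something along these lines, for example (a rough sketch done outside Gladys: the snapshot URL and model name are placeholders to adapt, and it assumes your camera exposes a JPEG snapshot endpoint):

```python
import base64
import time

import requests
from openai import OpenAI

SNAPSHOT_URL = "http://camera.local/snapshot.jpg"  # placeholder: your camera's JPEG snapshot endpoint
client = OpenAI()

# Grab 4 frames, 0.5 s apart, so at least one should catch the plate at a readable height.
frames = []
for _ in range(4):
    frames.append(base64.b64encode(requests.get(SNAPSHOT_URL, timeout=5).content).decode())
    time.sleep(0.5)

# Send all frames in a single request and ask about the plate.
content = [{"type": "text",
            "text": "Do you see a license plate in any of these images? "
                    "If so, read it; if not, answer exactly 'no plate'."}]
for b64 in frames:
    content.append({"type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": content}],
)
print(reply.choices[0].message.content)
```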
You put a fluorescent strip on your dashboard visible to the camera.
I never thought I’d get into tuning
Indeed, it’s a solution… But it’s a bit of a hack, I think.
I’ll test it anyway and report back!
Hello everyone,
The ChatGPT integration into scenes works like a charm, thank you very much!
On the other hand, does anyone have an example of a prompt that keeps ChatGPT from replying systematically?
Example: if I monitor my camera in front of my house every minute and I ask it: « Warn me if you see a person in the yard in front of the gate, otherwise there’s no need to notify me. »
Well, I still systematically get a response along the lines of: « I don’t see anyone in the yard in front of the gate. » (with some variations in each message, but the gist remains the same).
That said, it works well, because when there actually is someone it says: « There is someone in the yard in front of the gate. »
Have you tried using only the beginning of the prompt?
« Let me know if you see someone in the courtyard in front of the gate »
Yes, and even with nuances like « Only if », « If ever »,… same result: it’s chatty!
Maybe an idea, but I haven’t tested it: you write in the prompt: "Is there someone in front of the gate? Answer only with 'yes' or 'no'."
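For what it’s worth, here is a minimal sketch of that idea calling the OpenAI API directly rather than through Gladys (the model name and the notification helper are placeholders); constraining the answer to yes/no lets you notify only on "yes":

```python
from openai import OpenAI

client = OpenAI()

PROMPT = ("Is there someone in the courtyard in front of the gate? "
          "Answer with exactly one word: yes or no.")

def person_detected(image_data_url: str) -> bool:
    """Ask the model the yes/no question and return True only when it answers 'yes'."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": [
            {"type": "text", "text": PROMPT},
            {"type": "image_url", "image_url": {"url": image_data_url}},
        ]}],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

# Then only notify when the answer is 'yes', instead of forwarding every reply:
# if person_detected(snapshot_as_data_url):
#     send_notification("There is someone in the yard in front of the gate.")  # placeholder helper
```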
Great!
That’s not possible right now, it always gives a response! But it could be added
I’ll look into it
Thanks everyone, before trying the analysis again I’ll first limit the number of analyses. I’ll try to capture the ONVIF events from my camera in Gladys.
Hello.
Indeed, IMHO the problem is being approached the wrong way: you need to trigger on an ONVIF event.
Personally, I use Onvifeyes and Hooks that set MQTT values which Gladys then interprets, in the absence of ONVIF event handling on Gladys’ side.
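To give an idea of the glue involved, a minimal bridge could look like this (the topic names are made-up examples to adapt to your ONVIF-to-MQTT bridge and to the MQTT trigger of your Gladys scene; it assumes paho-mqtt ≥ 2.0):

```python
import paho.mqtt.client as mqtt

SOURCE_TOPIC = "onvif/frontdoor/motion"       # placeholder: whatever your ONVIF-to-MQTT bridge publishes
GLADYS_TOPIC = "gladys/scene/person-at-gate"  # placeholder: custom topic your Gladys scene is triggered by

def on_connect(client, userdata, flags, reason_code, properties=None):
    client.subscribe(SOURCE_TOPIC)

def on_message(client, userdata, msg):
    # Forward the motion event so the Gladys scene (and its image analysis) only runs on real events
    client.publish(GLADYS_TOPIC, msg.payload)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # placeholder broker address
client.loop_forever()
```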
Have a good Friday everyone,
Jean
I’ve just added an option so that ChatGPT doesn’t reply if there’s no need to reply.
This will be included in the next version of Gladys:
Gladys Assistant has been added to Selfh!
Yes, he told me he’d add it
So great!
So cool! Curious to know whether you’ll get any feedback from this post!!
Agreed!
So, I completely rethought everything and went with a Frigate/MQTT solution to be compatible with other future cameras that might not be ONVIF-compatible.
The only downside is that my Raspberry Pi has to do the person detection, whereas the camera does it natively.
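For anyone curious, the glue between Frigate and Gladys is essentially a small MQTT listener; a minimal sketch (it assumes Frigate’s default `frigate/events` topic and paho-mqtt ≥ 2.0, and the Gladys-side topic is a made-up example):

```python
import json

import paho.mqtt.client as mqtt

FRIGATE_TOPIC = "frigate/events"                # Frigate's default MQTT event topic
GLADYS_TOPIC = "gladys/camera/person-detected"  # placeholder: topic a Gladys MQTT scene trigger listens on

def on_connect(client, userdata, flags, reason_code, properties=None):
    client.subscribe(FRIGATE_TOPIC)

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    # React only to new 'person' detections, ignoring updates and other labels
    if event.get("type") == "new" and event.get("after", {}).get("label") == "person":
        client.publish(GLADYS_TOPIC, json.dumps({"camera": event["after"].get("camera")}))

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)  # placeholder broker address
client.loop_forever()
```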
How fast!
Thanks!
A question following up on the discussions of @guim31 and @Prof_Techno: when an image is attached to a request to the AI, is it a capture taken when the scene is triggered, or an image taken with the latency specified in the « camera » integration?