What would be good to reassure us about the security of our data would be a short video explaining how this integration works: the interaction between Gladys and GPT-3, which data is sent to GPT-3, in short, to know what's under the hood!!!
I made a small feature request to link the chat with MQTT so that, with the trio Node-RED/Sarah/Text-to-Speech, we could have speech recognition and speech output in Gladys and ChatGPT-3 via MQTT! Now all that's left is to vote!
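To give an idea of what such a bridge could look like, here is a rough TypeScript sketch using the `mqtt` npm package. The topic names and payload shape are purely illustrative assumptions on my part, not an existing Gladys API:

```typescript
// Hypothetical bridge: forward speech-to-text output to the Gladys chat over MQTT,
// and pipe Gladys replies back to a text-to-speech node (e.g. in a Node-RED flow).
// Topic names and payload format are illustrative, not an actual Gladys interface.
import mqtt from 'mqtt';

const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
  // Listen for chat replies that Gladys would publish (hypothetical topic).
  client.subscribe('gladys/chat/reply');

  // Simulate a sentence coming from the speech-recognition side.
  client.publish(
    'gladys/chat/message',
    JSON.stringify({ text: 'Show me the garden camera' }),
  );
});

client.on('message', (topic, payload) => {
  if (topic === 'gladys/chat/reply') {
    const { text } = JSON.parse(payload.toString());
    // A Node-RED flow could hand `text` to a text-to-speech service here.
    console.log('Gladys replied:', text);
  }
});
```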
Another piece of feedback, concerning camera management. I had several small « bugs » where this happened:
Me: Show me the garden camera
Gladys: An error occurred. Can you send the message on the Gladys forum to help us debug?
Then I wrote:
Me: Show me the Garden camera
Gladys: You are now viewing the garden camera.
Gladys: Here's what I see:
And it displayed the image from one of my two garden cameras. (a good start ^^)
My cameras are in Gladys under the names Portail and Portillon, but I can't ask it to show one or the other specifically. When I ask it for the image from the Portillon camera, it actually shows me the Portail one.
This is a generic error message I added; it is returned when there is a « critical » bug during the request. It can be the OpenAI API taking too long to respond, for example.
Since ChatGPT/GPT-3 is getting hammered at the moment, their API can fail to respond; it happens. I'm asking you to post the message on the forum so I can re-test on my side and see whether it's a « special » request.
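As a rough idea of what such a catch-all could look like: the sketch below wraps the OpenAI call and converts any unexpected failure (including a timeout) into the generic error message. The function names and the 30-second timeout are my own assumptions, not Gladys's actual implementation.

```typescript
// Hypothetical wrapper around the OpenAI call; declared here so the sketch compiles.
declare function askOpenAI(message: string): Promise<string>;

// Any « critical » failure during the request falls through to the generic reply.
async function handleChatMessage(message: string): Promise<string> {
  try {
    const reply = await Promise.race([
      askOpenAI(message),
      new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error('OpenAI API timeout')), 30_000),
      ),
    ]);
    return reply;
  } catch (e) {
    // Critical failure: return the generic error message quoted above.
    return 'An error occurred. Can you send the message on the Gladys forum to help us debug?';
  }
}
```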
Indeed, currently it works by room and not by device!
If it's a recurring request, I could implement it.
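For illustration, here is a minimal sketch of what a per-device lookup could look like, matching the sentence against device names before falling back to the room. This is an assumption about how it might be done, not Gladys's current logic (which resolves cameras by room):

```typescript
// Sketch of a per-device camera lookup (illustrative, not Gladys's current code).
interface Camera {
  name: string; // e.g. 'Portail', 'Portillon'
  room: string; // e.g. 'garden'
}

function findCameraByName(sentence: string, cameras: Camera[]): Camera | undefined {
  const lower = sentence.toLowerCase();
  // Prefer an exact device-name match, then fall back to the room name.
  return (
    cameras.find((c) => lower.includes(c.name.toLowerCase())) ??
    cameras.find((c) => lower.includes(c.room.toLowerCase()))
  );
}

const cameras: Camera[] = [
  { name: 'Portail', room: 'garden' },
  { name: 'Portillon', room: 'garden' },
];

// "Show me the Portillon camera" -> Portillon, not just the first garden camera.
console.log(findCameraByName('Show me the Portillon camera', cameras)?.name);
```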
Hello,
Wouldn't it be interesting to switch to ChatGPT rather than Davinci, since it's now ten times cheaper and available via API? (If you ever want to increase your margins @pierre-gilles! Or to let us have more fun)
Hi @jgcb00! Yes I agree, I think there are two things to test:
The ChatGPT API, which is cheaper and will allow me to increase the per-user quota. Since the ChatGPT API uses the turbo model, responses should normally be faster as well.
The GPT-4 model that was released recently. It's quite expensive though, so we'll see if it really makes sense for Gladys, where we don't necessarily need that much intelligence.
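For context, at the API level the switch mostly means moving from the legacy completions endpoint (used by text-davinci-003) to the chat completions endpoint (used by gpt-3.5-turbo). The endpoints and payload shapes below are OpenAI's public REST API; the surrounding wrapper functions are just an illustrative sketch, not Gladys's code:

```typescript
const OPENAI_KEY = process.env.OPENAI_API_KEY;

// text-davinci-003 goes through the legacy completions endpoint.
async function askDavinci(prompt: string): Promise<string> {
  const res = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'text-davinci-003', prompt, max_tokens: 200 }),
  });
  const data = await res.json();
  return data.choices[0].text;
}

// gpt-3.5-turbo (the "ChatGPT API") goes through the chat completions endpoint.
async function askTurbo(prompt: string): Promise<string> {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: { Authorization: `Bearer ${OPENAI_KEY}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```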
I just tested a migration from « GPT-3 davinci-3 » to « GPT-3.5 Turbo », and well, for now it's not that turbo… ^^
A request that takes 2-3 seconds on davinci-3 is taking up to 12 seconds on GPT-3.5 Turbo at the moment… The OpenAI forums confirm that the API is quite slow right now.
I imagine it's a congestion issue on their side; I'll wait and see if it changes in the coming weeks.
For now, we're sticking with GPT-3!
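If you want to reproduce the latency comparison yourself, a quick timing harness like the one below is enough. It reuses the askDavinci/askTurbo helpers from the previous sketch (declared here so it compiles on its own); the 2-3 s vs ~12 s figures will of course vary with OpenAI's load:

```typescript
// Helpers sketched in the previous snippet.
declare function askDavinci(prompt: string): Promise<string>;
declare function askTurbo(prompt: string): Promise<string>;

// Measure how long a single request takes and print it.
async function timeIt(label: string, fn: () => Promise<string>): Promise<void> {
  const start = Date.now();
  await fn();
  console.log(`${label}: ${((Date.now() - start) / 1000).toFixed(1)} s`);
}

async function main() {
  const prompt = 'Turn on the living room light';
  await timeIt('text-davinci-003', () => askDavinci(prompt));
  await timeIt('gpt-3.5-turbo', () => askTurbo(prompt));
}

main();
```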
In the meantime, I've reset the GPT-3 quota for all Gladys Plus accounts so you can test more. I saw that some of you had reached the 100 allowed requests per month, so this way you can keep using it.
At the same time, I'm afraid that one day it will scold us if we say « Gladys, turn up the heating » and answer « Due to government directives I cannot comply with your request, the indoor temperature of your home must be 15°C! »
One thing's for sure, GPT-3 is clearly not neutral!
It is in sync with
People are wary of these AIs, and rightly so. But projecting onto such extreme (and ugly) scenarios already seems quite premature to me. There are already many AI models, some open-source, and I have no doubt this will continue to develop and, therefore, to be closely watched.
Lashing out at ChatGPT because it contains filters, and because a little touch of conspiracy theorizing is being added to it (unfortunately, several other articles attest to this), is to misunderstand the concept itself.
Ah, but I find the concept of AI brilliant, having myself programmed neural networks about twenty years ago in VB6 (though not with the same power!). But I am more in favor of Gladys operating and being developed autonomously than of going through those GAFAM whose moral limits were recently exposed by Snowden's revelations!!! For as Rabelais said, "Wisdom cannot enter a wicked mind, and science without conscience is but the ruin of the soul."