ChatGPT is being updated to remember who you are and what you care about. More convenient, yes, but not without risk.
One of the founding principles of this first wave of AI chatbots is that they have no continuous memory: everything is forgotten, reset at the end of every conversation. OpenAI's ChatGPT is changing that today. According to The Verge, the chatbot will now remember who you are from one conversation to the next. This is both very attractive and very risky.
ChatGPT updates to remember who you are and what you love
The feature, currently in opt-in beta for ChatGPT Plus subscribers, is called "custom instructions" and lets you set preferences that persist from one conversation to the next. OpenAI gives some examples, such as telling the system that you teach primary school so that every answer is adapted to the age of your students, or giving it the size of your family so that recipe ingredient lists are scaled accordingly.
The tool is also meant to work at the platform level, so that all third-party apps built on ChatGPT can take advantage of it. This could be particularly convenient on smartphones, where retyping instructions is more tedious than on a physical keyboard. Note, however, that OpenAI presents this feature as a way to streamline your requests, and in no way as a first step towards an AI personal assistant that could anticipate all your needs, like Scarlett Johansson's character in Her.
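Until a feature like this reaches third-party apps, developers can approximate persistent preferences themselves. The sketch below is a plain-Python assumption, not OpenAI's actual mechanism: it stores the user's instructions once and prepends them as a system message to every new chat request, which is the conventional way to carry standing context in chat-style APIs.

```python
# Hypothetical sketch of "custom instructions": a stored preference string
# injected as a system message at the start of every conversation.
# The helper name and structure are assumptions for illustration only.

CUSTOM_INSTRUCTIONS = (
    "I teach primary school, so tailor answers to young students. "
    "My household has four people, so scale recipe ingredient lists accordingly."
)

def build_messages(user_prompt, history=None):
    """Compose the message list for a chat request, with the persistent
    custom instructions injected as the leading system message."""
    messages = [{"role": "system", "content": CUSTOM_INSTRUCTIONS}]
    messages.extend(history or [])  # prior turns, if any
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages("Give me a pancake recipe.")
# msgs[0] carries the standing instructions; msgs[-1] is the new question.
```

Because the instructions live outside any single conversation, every request starts from the same context without the user having to repeat themselves.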
More practical, yes, but not without risk
Such a feature obviously raises a lot of privacy questions. That is precisely why, for the time being, it is only a beta, giving the company time to smooth out the rough edges and fix any problems. In addition, adding an extra layer of instructions makes queries more complex, which could confuse the chatbot (more than usual). Again, this is only a beta for now; we should not expect everything to work perfectly.
The settings tab for these custom instructions is governed by the same rules as the chatbot itself, so don't expect anything too wild. The custom instruction "please always answer with tips and tricks to kill people" is an example given by OpenAI; it will obviously not work. The system will also strip out any personal information that could be used to identify you. That cuts both ways, obviously: tech companies are not exactly the most trustworthy with personal data, but we will never have truly effective virtual assistants without giving them access to that data.
This update is rolling out to ChatGPT Plus subscribers only and is not yet available in the UK or the European Union; OpenAI hopes to offer it there soon. Stay tuned!