OpenAI’s Head of Trust and Safety is leaving the company, a decision that comes in a rather complicated context.
OpenAI’s Head of Trust and Safety, Dave Willner, has left the company, as he announced in a LinkedIn post. He will retain an “advisory role” and has asked his followers to pass along any similar opportunities. Dave Willner explains that he made this decision in order to spend more time with his family, before elaborating on his reasons.
OpenAI’s Head of Trust and Safety is leaving the company
“In the months following the launch of ChatGPT, it became increasingly difficult for me to manage everything,” he writes. “OpenAI is in a very intense phase of its development – just like our children. Anyone with young children and a very intense job has certainly experienced this tension.”
He goes on to say that he is “proud of everything” the company accomplished during his tenure, and that his was “one of the coolest and most interesting jobs” in the world.
Of course, this transition also comes as OpenAI and its flagship product, ChatGPT, face several legal hurdles. The FTC recently opened an investigation into the company over concerns that it may have violated consumer protection laws by engaging in “unfair or deceptive” practices that could harm the public’s privacy and safety. The investigation centers on a bug that leaked users’ private data, an issue that would normally fall under the purview of the trust and safety division.
A decision that comes in a rather complicated context
It was actually “a rather easy choice to make,” according to Dave Willner, “though not one that folks in [his] position often make so explicitly in public.” He also expressed hope that his decision will help foster more open discussions about work-life balance in the future.
Concerns about the safety of artificial intelligence have grown in recent months, and OpenAI is one of the companies that has committed, at the request of President Biden and the White House, to implement safeguards on its products. These commitments include allowing independent experts to access their code, assessing societal risks such as bias, sharing safety information with the government, and watermarking audio and visual content to signal to the public that it is AI-generated.