Google, OpenAI and Microsoft team up to regulate AI, but Apple goes it alone

A consortium formed by OpenAI, Google, Microsoft and Anthropic has been launched to promote AI safety, share best practices and support applications that benefit society. Apple has not yet joined the initiative.

A consortium consisting of OpenAI (ChatGPT), Google (Google Bard), Microsoft (Bing Chat) and the AI safety company Anthropic has announced that the four companies will collaborate to establish best practices for the AI industry. Apple is staying out of the initiative for now.

"We are announcing today the creation of the Frontier Model Forum, bringing together Anthropic, Google, Microsoft and OpenAI," says Google in a blog post. "This new industry body will focus on ensuring safe and responsible development of AI models," the company adds. Its CEO, Sundar Pichai, has argued that AI requires regulation comparable to that of nuclear weapons.

The AI consortium: OpenAI, Google and Microsoft team up, but Apple sits it out

Companies creating AI technologies have a responsibility to ensure "that they are safe, secure and remain under human control," says Brad Smith, Microsoft's vice chair and president, in the announcement.

Google said in a statement that "this initiative is an essential step in uniting the technology sector and promoting responsible AI, addressing its challenges so that it benefits all of humanity."

The consortium will draw on the technical and operational expertise of its members to benefit the entire AI ecosystem, the statement continues. The tech giants want to create a public library of solutions to support industry norms and best practices. Apple, however, has stayed out of the initiative.

None of the companies involved has commented on whether Apple was invited to participate, but the project is intended to apply to the entire industry. In 2016, Google and Microsoft were among the founding members of a similar organization, the Partnership on AI, which Apple later joined and of which it remains a member today.

The fundamental objectives of the consortium are:

  • Advance AI safety research to promote responsible model development, minimize risks and enable independent, standardized assessments of capabilities and safety.
  • Identify best practices for responsible model development and deployment, helping the public understand the nature, capabilities, limitations and impact of the technology.
  • Collaborate with policymakers, academics, civil society and business to share knowledge about trust and safety risks.
  • Support efforts to develop applications that can help address society's greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and the fight against cyber threats.
