The New York Times takes drastic action against artificial intelligence

The New York Times is fed up with artificial intelligence systems that plunder its content. To protect itself from the bots that crawl the web to feed large language models, the famous American newspaper has updated its terms of service.



The New York Times is experimenting with a Google AI tool capable of writing articles, even as it prevents artificial intelligence systems from training on its content. The highly regarded American newspaper modified its terms of service on August 3 to forbid such use of its material, as first reported by Adweek.

Tech companies will no longer be able to retrieve text, photos, video, or audio for the development of “any software, including, but not limited to, the training of a machine learning or artificial intelligence (AI) system”. The newspaper could have stopped at the first clause, but chose to call out AI explicitly.

The New York Times wants to shield its content from bots.

The updated terms also specify that web crawlers designed to harvest such content may not be used without the publication’s written permission. Without a license from the daily, news aggregators like Yahoo News or Feedly can no longer access its material.

According to The New York Times, violators of these new rules may face fines or other “penalties”; presumably, the paper would then sue. However, the famous newspaper has not changed its robots.txt file, the standard mechanism for telling crawlers which URLs they may access. A strategy for luring tech corporations into a lucrative lawsuit?
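For context, robots.txt is a plain-text file served at a site’s root that crawlers are expected to consult voluntarily. A purely illustrative entry (not the New York Times’ actual file) blocking OpenAI’s announced GPTBot crawler while leaving other crawlers, such as search engines, unaffected might look like this:

```
# Illustrative robots.txt example — hypothetical, not the NYT's real file
# Block OpenAI's GPTBot crawler from the entire site
User-agent: GPTBot
Disallow: /

# All other crawlers (e.g. search engines) keep full access
User-agent: *
Disallow:
```

Because compliance with robots.txt is voluntary, leaving the file unchanged while tightening the legal terms of service shifts enforcement from a technical barrier to a contractual one.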

The New York Times also exploits artificial intelligence

As mentioned at the beginning of this article, the New York Times is exploring the possibilities offered by AI, while prohibiting others from using its content. The famous American daily is thus playing both sides to come out a winner either way. It is a free-rider strategy that could pay off, at a time when journalism ranks high among the jobs most threatened by ChatGPT.

Recently, Google revealed that it collects public data from the web to train its AI products, such as the Bard chatbot. Similarly, many of the large language models behind services such as ChatGPT are trained on large datasets, which can include copyrighted material scraped from the web without the original creator’s permission. Perhaps that is why the New York Times is taking retaliatory action.
