5 things about AI you may have missed today: 1st Hindi LLM OpenHathi, French Minister lauds India’s AI role, more


Today, December 13, marked another interesting day in the artificial intelligence (AI) space, especially in the Indian context. Indian startup Sarvam AI has launched the first Hindi-language large language model (LLM), OpenHathi-Hi-v0.1. In other news, at the Global Partnership on Artificial Intelligence (GPAI) Summit, French Minister Delegate for Digital Affairs, Jean-Noel Barrot, emphasized that India is playing a leading role in AI. This and more in today’s AI roundup. Let us take a closer look.

OpenHathi LLM launched

Sarvam AI, an Indian AI startup, has introduced OpenHathi-Hi-v0.1, the first model in its OpenHathi series and the first Hindi-language large language model (LLM) to be built, Moneycontrol reported. Developed on Meta AI’s Llama2-7B architecture, the model is claimed to match GPT-3.5’s performance on Indic languages. It uses a 48,000-token extension of Llama2-7B’s tokenizer and undergoes a two-phase training process: embedding alignment in the first phase and bilingual language modeling in the second.
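For readers curious what "extending a tokenizer and aligning new embeddings" looks like in practice, below is a minimal, illustrative sketch using the Hugging Face transformers library. The model name, the sample token list, and the layer-freezing strategy are assumptions for illustration only, not Sarvam AI's actual training recipe.

```python
# Illustrative sketch: extend a base tokenizer with new-language tokens,
# resize the embedding table, then train in two phases. Token list and
# freezing choices here are hypothetical.
from transformers import AutoTokenizer, AutoModelForCausalLM

base = "meta-llama/Llama-2-7b-hf"  # Llama2-7B base mentioned in the article
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical handful of Hindi (Devanagari) tokens; the article says the
# real extension adds roughly 48,000 tokens.
new_hindi_tokens = ["नमस्ते", "भारत", "भाषा"]
tokenizer.add_tokens(new_hindi_tokens)
model.resize_token_embeddings(len(tokenizer))

# Phase 1 ("embedding alignment"): update only the embedding and output
# layers so the new tokens settle into the existing representation space.
for name, param in model.named_parameters():
    param.requires_grad = ("embed_tokens" in name) or ("lm_head" in name)

# Phase 2 ("bilingual language modeling"): unfreeze everything and continue
# causal-LM training on mixed Hindi-English text.
for param in model.parameters():
    param.requires_grad = True
```

The two-phase idea is that freshly initialized embeddings for new tokens would otherwise inject noise into the pretrained weights, so they are warmed up first before full bilingual training resumes.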

French Minister speaks about India’s role in AI

French Minister Delegate for Digital Affairs, Jean-Noel Barrot, highlighted India’s prominent role in artificial intelligence as host of the Global Partnership on Artificial Intelligence (GPAI) Summit, ANI reported. During his second official visit to India, Barrot expressed the intent to strengthen bilateral cooperation on digital technology. He emphasized the ongoing collaboration between India and France, stating that both nations share common values and are committed to advancing their relationship and the global agenda. Barrot is also set to participate in the Ministerial Council of the GPAI Summit in Delhi.

“India plays a leading role in AI by hosting this summit by sharing the global partnership on AI next year. A lot has been achieved during the Indian G20 presidency and so we’re going to keep building our relationships, and our global agenda together because we are very much aligned and share the same values,” ANI quoted him as saying.

Japan Minister calls for safe, secure, and resilient AI

Japan’s Vice Minister for Policy Coordination, Hiroshi Yoshida, has called for the establishment of a safe, secure, and resilient AI ecosystem, as per a report by ANI. During discussions on Wednesday, he specifically sought India’s active participation in achieving this goal. Emphasizing the significance of responsible AI, Yoshida highlighted Prime Minister Narendra Modi’s keen interest in AI policy and encouraged India to actively engage in shaping a responsible and secure AI environment.

“AI is a very important issue all over the world and it is changing our lives and society, and we need to work on AI risk mitigation…GPAI is very important and we also discussed in G7 and made a guiding principle for all AI actors regarding AI through the Hiroshima AI process,” Yoshida said while speaking to ANI.

Meta ignored legal warnings, used pirated books to train AI, lawsuit alleges

According to a report by Reuters, Meta Platforms, the parent company of Facebook and Instagram, was warned about the legal risks of using thousands of pirated books to train its AI models. However, according to a recent filing in a copyright infringement lawsuit, the company proceeded with the practice anyway. The lawsuit, initially brought by comedian Sarah Silverman, Pulitzer Prize winner Michael Chabon, and other prominent authors, alleges that Meta used their works without permission to train its AI language model, Llama. A California judge recently dismissed part of the Silverman lawsuit but indicated that the authors would be allowed to amend their claims.

NYT hires newsroom AI director

The New York Times has appointed Quartz co-founder Zach Seward as the editorial director of AI initiatives, reflecting a growing trend in media organizations exploring the use of artificial intelligence in newsrooms, reported Axios. In this new role, Seward will collaborate with newsroom leadership to define principles for the responsible use of generative AI, addressing ethical considerations to maintain public trust, as the industry continues to experiment with AI technology.
