Adobe is creating an AI-powered text-to-video model: Know what it is all about

Adobe is reportedly developing an AI-powered text-to-video generation model and plans to strengthen it by acquiring video content from photographers and artists. The company intends to use this footage, along with its existing library of stock images and videos, to train the model. According to sources cited by Bloomberg, Adobe is offering an average of $2.62 (approximately Rs. 220) per minute of submitted video.
 

What is Adobe planning? 
 

Adobe has reached out to its network of photographers and artists, offering up to $120 (around Rs. 10,000) for submitted videos. The requested videos should show people engaging in everyday activities and expressing a range of emotions, including joy and anger. Adobe apparently intends to use this footage to train its AI model to recognise and replicate human expressions and natural movements.

Furthermore, Adobe has requested specific types of video content: short clips of people expressing emotions, shots focusing on parts of the human anatomy, including hands, feet, and eyes, and videos of individuals interacting with objects like smartphones and gym equipment.

Documents obtained by Bloomberg emphasise that submissions must adhere to guidelines on copyrighted material, nudity, and offensive content. While the average compensation offered is $2.62 per minute, some contributors may receive up to $7.25 (approximately Rs. 600) per minute for their footage.
 

The rise of AI models
 

This development highlights a growing trend: as publicly available data sources become exhausted, companies are increasingly paying to procure data for training AI models. While some opt for ethical data acquisition methods, others have faced criticism for allegedly using copyrighted material sourced from social media platforms. For instance, a recent report claimed that OpenAI transcribed over a million hours of YouTube videos to train its GPT-4 model.

 
