The ownership of content in the age of artificial intelligence


As artificial intelligence (AI) touches every aspect of our lives, who owns the digital content we generate every day? The answer is complex. Today’s consumers find themselves in a world where their personal data is constantly collected, analyzed, and used to improve the services they rely on. Now it is also being used to create new content. So who truly owns all of it?

According to a report by Chiratae Ventures, the consumer tech industry will touch US$300 billion by 2027, and more than 500 million Indians already go online every day for entertainment and gaming. According to Forbes, active social media penetration in India stands at 33.4%, and as of January 2023, 67.5% of all internet users in India, regardless of age, used at least one social networking platform.

How much personal data does that generate? Why should we care?

To get an idea, consider the case of Neha, a modern Indian woman whose daily life is steeped in technology. An avid social media user, she relies on a host of AI-driven platforms to curate the content she prefers. From personalized news feeds to algorithmically suggested music playlists, her digital experiences are shaped by AI systems that are constantly learning from her.

Start with the photos Neha shares on social media. They sit at the heart of the debate over AI-driven content ownership. Facebook and Instagram are notorious for their data collection practices: when users like Neha upload photos and interact with content on these platforms, AI algorithms analyze that data and track their digital behaviour to understand their preferences better.

Armed with this analysis, the companies then target advertisements and personalize the user experience. The result is that even though users are the ones providing the content, the platforms retain ownership of the data it generates, raising concerns about privacy and data exploitation.

“While platforms and algorithms don’t own creators’ content, strictly speaking, they have a lot of power over who gets to see it, sometimes to the detriment of honest and hardworking creators,” says Stuart Meczes, Creative Director of Contnt.io, a subscription-based platform for creators.

Who Owns the Data?

Who owns the content that users constantly upload to social media? Is it the users themselves, or do the platforms and AI algorithms that process that data have a stake in the ownership?

On most social media and content platforms, content is owned by the creators themselves.

“However, platforms have a lot of power — which they’re often silently exercising in the background — to control who sees that content. With the rise of content moderation AI algorithms, the decision likely isn’t being made by a human these days. These algorithms aren’t perfect, and can sometimes end up spotlighting harmful content, or wrongfully deplatforming individuals,” says Meczes.

He recounts a bitter personal experience. A self-published author, Meczes says that when online sales of his first book started picking up, the platform’s algorithm incorrectly flagged him for paying for reviews and deplatformed him.

“It took a lot of back and forthing with the platform owners to have my profile reactivated, but by that time, the damage had already been done, and I’d lost a lot of momentum in book sales — and my livelihood took a hit as a result,” he recalls.

Consequences?

How can the data these big companies collect about us be misused? Remember the 2018 Cambridge Analytica scandal, from which social media giant Facebook is still recovering? The scandal revealed how personal data from millions of Facebook users was harvested without consent for political advertising.

The incident was widely condemned and strengthened calls for data privacy regulations, which today exist in the form of the European Union’s General Data Protection Regulation (GDPR). In December 2023, European Union lawmakers also reached a deal on the AI Act to mitigate harm in areas where AI poses the biggest risk to fundamental rights, such as health care, education, border surveillance, and public services, and to ban uses that pose an “unacceptable risk.”

Such regulations aim to empower users like Neha with more control over their personal data and ensure transparency from tech companies regarding data usage and ownership.

AI-Generated Content & Ownership

Then there is AI-generated content. Generating images and videos with generative AI can be fun, but what if AI-generated videos using Neha’s likeness or preferences start surfacing on these platforms? Deepfakes are already circulating on the web. Last year, PM Modi expressed concern over a video that appeared to show him doing garba. Last week, a Ukrainian YouTuber was shocked to find her AI clone on Chinese social media.

The ownership rights to such content are uncertain. Over a similar concern, The New York Times recently sued OpenAI, alleging that the company used millions of its articles to train its chatbots without permission.

If this happens to Neha, she may have unwittingly contributed to the creation of such content through her interactions. So who owns the AI-generated content that draws on her data? Is it Neha herself, or do the platforms and AI algorithms that process her data have a stake in the ownership?

What Can You Do?

In the age of AI, the concept of content ownership has us scrambling to tell whether a source of information is human or algorithmic. Social media platforms and tech companies argue that they need the data generated by users like Neha to improve services, enhance the user experience, and drive innovation. Consumers, on the other hand, are growing uneasy about the lack of transparency and control over the content they contribute.

It’s a good idea to be aware of the terms and conditions of the platforms you use. Reading the fine print and understanding data-sharing policies might be tedious, but it can help you reclaim some control over the content you generate and share online.

The Need for AI Regulation

This is why AI regulation is of utmost importance. Advocating for stronger data protection regulations and demanding transparency from tech companies is essential in shaping a digital future where users have a clearer understanding of how their data is used and who ultimately owns the content they create.

A significant aspect of AI and content today is regulatory policy, which must protect end users without stifling innovation. That fear of stifling innovation is a key reason most countries have been slow to draft AI laws. But given the staggering number of AI users in India, building a framework that balances innovation and protection will be a challenge.

As Nvidia CEO Jensen Huang has said, every country needs its own AI infrastructure so it can capture the economic potential while protecting its own culture. Union Minister of State for Electronics and IT Rajeev Chandrashekhar recently told NASSCOM that India’s AI regulatory framework will be discussed and debated in June-July this year.

“We will fully exploit the potential of AI but set up the guardrails as well to prevent misuse. We are today seen by the world at the forefront to harness AI technology,” he said.

The ownership of content in the age of AI is a pressing issue that demands our attention. Through the lens of individuals like Neha, we see the intricate dance between personal agency and the algorithms that shape our digital experiences. As we move forward, consumers must be informed, empowered, and proactive in shaping a digital landscape that respects and protects their ownership rights in an age increasingly dominated by AI.

