After dramatic twists, AI goes mainstream in 2023; India sets user safety and privacy guardrails

In 2023, as technology galloped ahead at an unprecedented pace, India enacted new laws on data privacy and storage, moving into higher gear to safeguard users and define the compliance framework for BigTech.

From the excitement and awe around Artificial Intelligence (AI) and the virality of OpenAI’s ChatGPT to India’s crackdown on deepfakes and New Delhi’s determined moves on digital sovereignty, dramatic twists and turns in the tech landscape defined an eventful 2023. The spotlight was clearly on social media platforms and BigTech.

India’s resolve to safeguard netizens from new kinds of user harms emerging in the digital space was clear, as the government moved decisively to craft regulations and laws.

It crafted a future-ready framework that would not just protect digital personal data but also ensure an open, safe and trusted internet backed by tighter accountability for digital and social media platforms operating in India.

The government talked tough with social media platforms after several ‘deepfake’ videos targeting leading actors, including Rashmika Mandanna, went viral, sparking public outrage and raising concerns over the weaponisation of technology for creating doctored content and harmful narratives.

It also asked platforms to act decisively on deepfakes and align their terms of use and community guidelines with the IT Rules and current laws.

Further, the government made it clear that any compliance failure would be dealt with strictly and attract legal consequences.

Globally, macroeconomic woes and growth challenges kept BigTech on edge as companies tightened their belts and their purse strings, resorting to mass downsizing at the start of 2023.

The social media space continued to buzz with action amid the rising appeal of shareable, bite-sized short-form videos and the popularity of apps and memes to suit every occasion.

Elon Musk’s social media platform Twitter was rebranded as X in July 2023, and in one swift move, the familiar blue bird logo was officially retired.

Meta launched the Twitter-rival app Threads, which was an instant hit with millions of downloads but lost steam in the weeks that followed.

In January, the government set up three Grievance Appellate Committees to address users’ complaints against social media and other internet-based platforms.

In August, Parliament approved the Digital Personal Data Protection (DPDP) Bill, aimed at safeguarding the digital personal data of 1.4 billion citizens and underlining India’s digital sovereignty.

The freshly minted law will arm individuals with greater control over their data while allowing companies to transfer users’ data abroad for processing, except to territories restricted by the Centre through notification.

It also gives the government power to seek information from firms and issue directions to block content.

The law envisages the establishment of the Data Protection Board of India, tasked with monitoring compliance, inquiring into breaches, imposing penalties, and directing remedial or mitigation measures in the event of a data breach.

2023 was also a year when governments across the world moved to formulate rules around content accountability and confronted ethical questions as Artificial Intelligence went mainstream, holding out both the promise of a transformative future and fears of a dystopian society plagued with misinformation, AI-laced weapons, and job losses.

As machines demonstrated impressive capabilities in reasoning and human-like decision-making, AI advances rivalled sci-fi movie scripts, complete with doomsday prophecies.

The most powerful faces of the global tech industry, including OpenAI CEO Sam Altman, joined the chorus of voices flagging the risk of extinction from AI as being on par with nuclear war and pandemics, and exhorted policymakers to move with caution.

Altman, 38, fended off his share of boardroom high drama as he was fired, then reinstated, in an epic five-day battle for control of a company at the forefront of AI.

Back home, with 830 million internet users in the world’s largest ‘digitally connected democracy’ governed by a two-decades-old IT Act that lacks teeth to deal with sophisticated forms of user harm (doxxing, cyberstalking, and online trolling), the government initiated stakeholder dialogues to formulate a draft ‘Digital India Act’.

The draft Bill is likely to spell out norms for emerging technologies, BigTech platforms and firms, moot different obligations for various classes of intermediaries such as e-commerce, search engines, and gaming, and tighten the accountability of digital platforms in an era of rampant misinformation, deepfakes and new forms of user harm.

The bill, to be legislated after the 2024 elections and the formation of a new government, is widely expected to take into account online offences, including gaslighting, catfishing, cryptojacking, astroturfing, doxxing, transmission of misinformation, stalking or harassment over the internet, deepfakes, and cyber-mob attacks, encoding them into the list of user harms.

The bill is unlikely to have hardcoded immunity provisions for social media companies this time around; instead, the Centre will notify whether any class of intermediary is eligible to claim exemption from liability for third-party digital content.

There has been a raging debate, globally and in India, on whether online intermediaries should be entitled to ‘safe harbour’ provisions as a ‘free pass’. The regulatory overhaul in the works may instead do away with such default provisions to keep the sweeping power of platforms in check.

During 2023, the possibility of AI harm galvanised policymakers globally.

The US and UK opened new chapters in AI safety as nations came together for urgent conversations on the future of AI and how to maximise benefits and mitigate risks.

In December, the Global Partnership on AI, an alliance of 29 members, unanimously adopted the New Delhi Declaration, pledging their commitment to a collaborative approach to AI applications that benefit people and to creating a global governance framework for safe and trusted AI.
