Seeing is believing? Global scramble to deal with deepfakes


In today’s digital age, many individuals rely on online content to form opinions, analyse facts and make decisions; but in doing so, they have become increasingly vulnerable to misinformation. The rise of deepfakes, or digitally manipulated videos or images, only exacerbates this problem. As international governments grapple with the danger posed by deepfakes, the short but simple advice is: “Don’t believe everything you watch”.

Deepfakes, otherwise known as synthetic media, have become more menacing in recent times as artificial intelligence (AI) technologies have become easier to access. Deepfakes are AI-generated videos or images depicting individuals or scenes that never existed in reality. This high-tech deception can be anything from a celebrity appearing to make a political statement to a world leader announcing the wrong policy. The implications of deepfakes spreading false information are vast and can potentially threaten national security, disrupt political discourse and manipulate elections.

Governments around the world are starting to take action on deepfakes. For example, the European Commission proposed measures to counter disinformation ahead of the European Parliament elections in May 2019. Although the primary focus was tackling disinformation spread by foreign actors, there were also calls to strengthen protections against the risks of deepfakes.

However, deepfakes are not the only kind of misinformation we should be wary of. Social media platforms, for example, are rife with viral hoaxes and half-truths. Such content can easily be reproduced and shared, thereby spreading a false narrative. As such, it is important to exercise caution from the start by applying a critical lens when reviewing online sources.

It is also ever more necessary to verify the authenticity of sources whenever possible. Advancements in technology make it easier to detect whether a video or image is a deepfake. In addition, biometric data such as fingerprints and facial recognition can be used to confirm identity. Such techniques can help to minimise the risk posed by deepfakes.
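One basic form of source verification is comparing a downloaded file against a checksum published by the original source: if the digests differ, the file has been altered somewhere along the way. A minimal sketch in Python using only the standard library (the file path and expected digest below are placeholders, not real published values):

```python
import hashlib
import hmac

def sha256_of_file(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_digest(path: str, expected_hex: str) -> bool:
    """True if the file's digest equals the digest the source published.

    hmac.compare_digest avoids timing side channels in the comparison.
    """
    return hmac.compare_digest(sha256_of_file(path), expected_hex)
```

This only proves the file is the one the publisher released; it says nothing about whether the publisher's own footage is genuine, which is why checksums complement rather than replace deepfake-detection tools.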

In an age where information has become increasingly polarised, deepfakes are just one example of how technologies are being used to manipulate and spread false information. As there is no single solution to this threat, governments must continue to develop legislation and protect citizens from deepfakes and other forms of misinformation. The age-old idiom 'seeing is believing' must now be revised to 'research is believing'.

Washington: Chatbots spouting falsehoods, face-swapping apps crafting porn videos and cloned voices defrauding companies of millions – the scramble is on to rein in AI deepfakes that have become a misinformation super spreader.

Artificial intelligence is redefining the proverb "seeing is believing," with a deluge of images created out of thin air and people shown mouthing things they never said in real-looking deepfakes that have eroded online trust.

"Yikes. Def not me," tweeted billionaire Elon Musk last year in one vivid example of a deepfake video that showed him promoting a cryptocurrency scam.

China recently adopted expansive rules to regulate deepfakes, but most countries appear to be struggling to keep up with the fast-evolving technology amid concerns that regulation could stymie innovation or be misused to curtail free speech.

Experts warn that deepfake detectors are vastly outpaced by creators, who are hard to catch as they operate anonymously using AI-based software that was once touted as a specialised skill but is now widely available at low cost.

Facebook owner Meta last year said it took down a deepfake video of Ukrainian President Volodymyr Zelensky urging citizens to lay down their weapons and surrender to Russia.

And British campaigner Kate Isaacs, 30, said her "heart sank" when her face appeared in a deepfake porn video that unleashed a barrage of online abuse after an unknown user posted it on Twitter.

"I remember just feeling like this video was going to go everywhere – it was horrendous," Isaacs, who campaigns against non-consensual porn, was quoted as saying by the BBC in October.

The following month, the British government voiced concern about deepfakes and warned of a popular website that "virtually strips women naked."

‘Information apocalypse’

With no barriers to creating AI-synthesised text, audio and video, the potential for misuse in identity theft, financial fraud and tarnished reputations has sparked global alarm.

The Eurasia Group called the AI tools "weapons of mass disruption."

"Technological advances in artificial intelligence will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets," the group warned in a report.

"Advances in deepfakes, facial recognition, and voice synthesis software will render control over one's likeness a relic of the past."

This week, AI startup ElevenLabs admitted that its voice cloning tool could be misused for "malicious purposes" after users posted deepfake audio purporting to be actor Emma Watson reading Adolf Hitler's "Mein Kampf."

The growing volume of deepfakes may lead to what the European law enforcement agency Europol described as an "information apocalypse," a scenario in which many people cannot distinguish fact from fiction.

"Experts fear this may lead to a situation in which citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable," Europol said in a report.

That was demonstrated last weekend when NFL player Damar Hamlin spoke to his fans in a video for the first time since he suffered a cardiac arrest during a game.

Hamlin thanked the medical professionals responsible for his recovery, but many who believed conspiracy theories that the COVID-19 vaccine was behind his on-field collapse baselessly labelled his video a deepfake.

‘Super spreader’

China enforced new rules last month that require businesses offering deepfake services to obtain the real identities of their users. They also require deepfake content to be appropriately tagged to avoid "any confusion."

The rules came after the Chinese government warned that deepfakes present a "danger to national security and social stability."

In the United States, where lawmakers have pushed for a task force to police deepfakes, digital rights activists caution against legislative overreach that could kill innovation or target legitimate content.

The European Union, meanwhile, is locked in heated discussions over its proposed "AI Act."

The law, which the EU is racing to pass this year, will require users to disclose deepfakes, but many fear the legislation could prove toothless if it does not cover creative or satirical content.

"How do you reinstate digital trust with transparency? That is the real question right now," Jason Davis, a research professor at Syracuse University, told AFP.

"The (detection) tools are coming and they're coming relatively quickly. But the technology is moving perhaps even faster. So like cyber security, we will never solve this, we will only hope to keep up."

Many are now struggling to comprehend advances such as ChatGPT, a chatbot created by the US-based OpenAI that is capable of generating strikingly cogent texts on almost any topic.

In a study, media watchdog NewsGuard, which called the chatbot the "next great misinformation super spreader," said most of its responses to prompts related to topics such as COVID-19 and school shootings were "eloquent, false and misleading."

"The results confirm fears… about how the tool can be weaponized in the wrong hands," NewsGuard said.
