Edited by Deepali Verma
After deepfake images of Taylor Swift and fake robocalls imitating Joe Biden’s voice went viral on social media platforms this week, debate among lawmakers pushing for stronger guardrails on the use of artificial intelligence has sprung up again.
Clyde Vanel, a Democrat and chair of the New York State Subcommittee on Internet and New Technology, said the process started in September, when Gov. Kathy Hochul signed legislation addressing deepfakes.
“What happened to Taylor Swift in New York is illegal,” Vanel remarked. “It’s a class A misdemeanor to knowingly or recklessly publish a generated photo or visual depiction of someone with sexually explicit content. We have to make the public aware of what we have in place. We have to make them aware that this is wrong and the authorities have grounds to prosecute these kinds of actions.”
But with the 2024 election right around the corner, Vanel said the work isn’t done: New York does not currently require disclosure of the use of artificial intelligence in campaigns, a requirement he and several others have proposed in past legislation.
“When it comes to campaigns, people want to see the reality, the facts. Who the person is not artificially made or anything,” Sam Patel, who works in Schenectady, said.
“Your views are likely to be changed if you watch a campaign online,” Markus, an 18-year-old Schenectady senior, remarked. “Then say, ‘Oh… I really support it.’ But the thing is, he never said that, more so if you already cast your vote or are really dedicated from that standpoint.”
Reports have revealed that Swift is considering legal action against the companies that facilitated the spread of the fake images. Vanel said working with those platforms will be crucial to preventing fake content from spreading.
“We recently found out that one of the platforms cut the staff in the department that addresses these kinds of things,” said Vanel, who made news when he drafted legislation with the help of AI. “We should ensure that certain things are in place with the platforms, that they have the resources to prevent this stuff and take it down.”
As an example, Vanel posted a deepfake video on his social media this week, one he said he created to look and sound like him.
“In the process of posting it, I had to put warnings in place,” he said. “If you saw it, I posted warnings that said this is a deepfake. I made sure I described that it was a deepfake.”