The creators behind AI-generated influencers are set to compete for a cash prize in the world's first online beauty pageant exclusively for digitally produced ladies. The event, called 'Miss AI', is being organized by the World AI Creator Awards (WAICA) in collaboration with Fanvue, an OnlyFans-like subscription-based platform that already hosts a number of virtual models, including some who offer adult content.

The digital contestants hoping to secure the Miss AI crown will be judged on their beauty, their underlying tech, and their social media pull, according to WAICA's official website. Each creator's "social media clout" will be assessed on engagement with fans, rate of audience growth, and ability to utilize social media platforms such as Instagram. "We share the vision for the WAICAs to become the Oscars of the AI creator economy," Fanvue co-founder Will Monange said.

To participate, competitors are required to submit a 100% AI-generated image of a woman and answer a series of questions about how and why the model was created, how many followers it has, how its content is monetized, how much revenue it generates, and even the stereotypical beauty pageant question about the model's "one dream to make the world a better place." Creators are allowed to make unlimited submissions to the contest, so long as each one is a different AI-generated model.

Aside from the crown, the creators behind the top three Miss AI contestants will be awarded prizes totaling over $20,000, with the winner also receiving $5,000 in cash. The finalists will also be able to earn AI mentorship programs, PR services and more.
Ironically, the AI contestants will be judged by fellow AI-generated influencers, namely Aitana Lopez and Emily Pellegrini, who have both already amassed hundreds of thousands of followers on Instagram and have been raking in thousands of dollars posing for top clothing brands like Victoria's Secret and Guess, and drawing the attention of footballers and billionaires. The AI judges will also be joined by their flesh-and-blood counterparts: entrepreneur and PR adviser Andrew Bloch and beauty pageant historian Sally-Ann Fawcett.
The first AI influencers have already entered the pageant after submissions opened on Sunday, including one named Alba Renai, who recently announced to her 11,000 Instagram fans that she had been hired as the first non-human host on a weekly special segment of 'Survivor' in Spain. The exact date of the award ceremony is still unknown, but the terms and conditions of the contest state that all prizes will be awarded and paid out in full by August 1, 2024.
AI fakery is quickly becoming one of the biggest problems confronting us online. Deceptive pictures, videos and audio are proliferating as a result of the rise and misuse of generative artificial intelligence tools.
With AI deepfakes cropping up almost every day, depicting everyone from Taylor Swift to Donald Trump, it's getting harder to tell what's real from what's not. Video and image generators like DALL-E, Midjourney and OpenAI's Sora make it easy for people without any technical skills to create deepfakes: just type a request and the system spits it out. These fake images might seem harmless, but they can be used to carry out scams and identity theft, or for propaganda and election manipulation. Here is how to avoid being duped by deepfakes.

How to Spot a Deepfake

In the early days of deepfakes, the technology was far from perfect and often left telltale signs of manipulation. Fact-checkers have pointed out images with obvious errors, like hands with six fingers or eyeglasses with differently shaped lenses. But as AI has improved, spotting fakes has become much harder. Some widely shared advice, such as looking for unnatural blinking patterns in deepfake videos, no longer holds, said Henry Ajder, founder of consulting firm Latent Space Advisory and a leading expert in generative AI.

Still, there are some things to look for, he said. Many AI deepfake photos, especially of people, have an electronic sheen to them, "an aesthetic sort of smoothing effect" that leaves skin "looking incredibly polished," Ajder said. He warned, however, that creative prompting can sometimes eliminate this and many other signs of AI manipulation. Check the consistency of shadows and lighting: often the subject is in clear focus and appears convincingly lifelike, while elements in the backdrop are less realistic or polished.

Look at the Faces

Face-swapping is one of the most common deepfake methods. Experts advise looking closely at the edges of the face. Does the facial skin tone match the rest of the head and body? Are the edges of the face sharp or blurry? If you suspect a video of a person speaking has been doctored, look at their mouth.
Do their lip movements match the audio perfectly? Ajder suggests looking at the teeth: are they clear, or are they blurry and somehow inconsistent with how they look in real life? Cybersecurity company Norton says algorithms might not yet be sophisticated enough to generate individual teeth, so a lack of outlines for individual teeth could be a clue.

Think About the Bigger Picture

Sometimes the context matters. Take a beat to consider whether what you're seeing is plausible. The Poynter journalism website advises that if you see a public figure doing something that seems "exaggerated, unrealistic or not in character," it could be a deepfake. For example, would the pope really be wearing a luxury puffer jacket, as depicted in a notorious fake photo? And if he did, wouldn't there be additional photos or videos published by legitimate sources?

Using AI to Find the Fakes

Another approach is to use AI to fight AI. Microsoft has developed an authenticator tool that can analyze photos or videos and give a confidence score on whether they have been manipulated. Chipmaker Intel's FakeCatcher uses algorithms to analyze an image's pixels to determine whether it is real or fake. There are tools online that promise to sniff out fakes if you upload a file or paste a link to the suspicious material. But some, like Microsoft's authenticator, are available only to selected partners, not the public. That's because researchers don't want to tip off bad actors and give them a bigger edge in the deepfake arms race. Open access to detection tools could also give people the impression that they are "godlike technologies that can outsource the critical thinking for us," when instead we need to be aware of their limitations, Ajder said.

The Hurdles to Finding Fakes

All this being said, artificial intelligence has been advancing at breakneck speed, and AI models are being trained on internet data to produce increasingly higher-quality content with fewer flaws.
That means there’s no guarantee this advice will still be valid even a year from now. Experts say it might even be dangerous to put the burden on ordinary people to become digital Sherlocks, because it could give them a false sense of confidence as it becomes increasingly difficult, even for trained eyes, to spot deepfakes.
US House and Senate lawmakers have raised alarm bells about the potential use of artificial intelligence in America’s nuclear arsenal, arguing that the technology must not be put in a position to fire off warheads on its own. A group of three Democrats and one Republican introduced a bill that calls for banning AI from being used in a way that could lead to it launching nuclear weapons. If enacted, the legislation would codify a current Pentagon policy that requires a human to be “in the loop” on any launch decisions.
“We want to make sure there’s a human in the process of launching a nuclear weapon if, at any point in time, we need to launch a nuclear weapon,” US Representative Ken Buck, a Colorado Republican, said on Friday in a Fox News interview. “So you see sci-fi movies, and the world is out of control because AI has taken over – we’re going to have humans in this process.” Buck alluded to Hollywood’s portrayal of a nightmare scenario in which AI systems gain control of nuclear weapons, as depicted in films such as ‘WarGames’ and ‘Colossus: The Forbin Project.’ He warned that using AI without a human chain of command would be “reckless” and “dangerous.”

Representative Ted Lieu agreed, saying, “AI is amazing. It’s going to help society in many different ways. It can also kill us.” Lieu, a California Democrat, is a lead backer of the AI legislation, along with two other Democrats: Representative Don Beyer of Virginia and Senator Edward Markey of Massachusetts.

Although the idea of an AI-instigated nuclear war might once have been dismissed as science fiction, many scientists believe it is no longer a far-fetched risk. A poll released earlier this month by the Stanford Institute for Human-Centered Artificial Intelligence found that 36% of AI researchers agreed that the technology could cause a “nuclear-level catastrophe.” US Central Command (CENTCOM) last week announced the hiring of a former Google executive as its first-ever AI adviser, and the Pentagon has asked Congress for $1.8 billion in AI research funding for its next fiscal year.