Archives 2023

Apollo group asteroid to pass Earth soon, says NASA; Know how big it is and how close it will get


Asteroids are mostly located in the main asteroid belt between the orbits of Mars and Jupiter in our solar system. However, their orbits sometimes bring them close to Earth, raising the possibility of an impact. These close calls highlight the importance of continued technological development in asteroid detection and monitoring programs, as well as planetary defence tests such as NASA’s DART mission, to help ensure the safety of our planet from these space rocks. With the help of its advanced ground and space-based telescopes, NASA has tracked an asteroid whose orbit will bring it very close to Earth tomorrow, December 28. Know all about this close encounter.

Asteroid 2023 YD: Details

As per the Center for Near-Earth Object Studies (CNEOS), an asteroid designated 2023 YD is on its way toward Earth and is expected to make its closest approach to the planet tomorrow, December 28, passing at a close distance of just 605,000 kilometers. The near-Earth space rock is hurtling along its orbit at a speed of about 35,784 kilometers per hour, which is much faster than an Intercontinental Ballistic Missile (ICBM)!
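
For a quick sense of those figures, here is a minimal back-of-the-envelope sketch in Python. The average Earth-Moon distance of about 384,400 km used for comparison is an assumed reference figure, not part of the CNEOS data quoted above.

# Rough, illustrative arithmetic only; figures from the article plus an
# assumed average Earth-Moon distance for comparison.
MISS_DISTANCE_KM = 605_000       # reported closest-approach distance
SPEED_KMH = 35_784               # reported speed
EARTH_MOON_KM = 384_400          # assumed average Earth-Moon distance

speed_km_s = SPEED_KMH / 3600                        # ~9.9 km per second
lunar_distances = MISS_DISTANCE_KM / EARTH_MOON_KM   # ~1.6x the Earth-Moon distance

print(f"Speed: {speed_km_s:.1f} km/s")
print(f"Miss distance: {lunar_distances:.2f} lunar distances")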

It belongs to the Apollo group of Near-Earth Asteroids, which are Earth-crossing space rocks with semi-major axes larger than Earth’s. The group is named after the large asteroid 1862 Apollo, discovered by German astronomer Karl Reinmuth in the 1930s, according to NASA.

How big is the asteroid?

In terms of size, Asteroid 2023 YD is nearly 92 feet wide, which makes it almost as big as an aircraft! Although it is bigger than the Chelyabinsk asteroid, which caused damage on Earth in 2013, this space rock isn’t big enough to be classified as a Potentially Hazardous Object and is not expected to cause any damage on Earth.

How do asteroids come close to Earth?

NASA says that the orbits of asteroids can be changed by Jupiter’s massive gravity and by occasional close encounters with planets like Mars or other objects. These accidental encounters can knock asteroids out of the main belt and hurl them into space in all directions across the orbits of the other planets.


Huawei Nova 12 Series With 60-Megapixel Selfie Cameras, Up to 100W Fast Charging Launched: Price, Specifications


The Huawei Nova 12 series, comprising four models — Huawei Nova 12, Huawei Nova 12 Pro, Huawei Nova 12 Ultra and Huawei Nova 12 Lite — was launched on Tuesday in China. All four new smartphones share a similar design with slightly different specifications. They all run on HarmonyOS 4 and feature 6.7-inch displays. The Huawei Nova 12, Huawei Nova 12 Pro, and Nova 12 Ultra are equipped with a 4,600mAh battery with support for up to 100W wired fast charging. The Huawei Nova 12 Lite, the entry-level model in the family, is powered by a Snapdragon 778G 4G SoC. All four models flaunt 50-megapixel dual rear camera units. The Huawei Nova 12 Pro and Huawei Nova 12 Ultra offer satellite connectivity.

Huawei Nova 12 series price, availability

The price of the Huawei Nova 12 starts at CNY 2,999 (roughly Rs. 34,000) for the base variant with 256GB of storage. The 512GB storage variant costs CNY 3,399 (roughly Rs. 40,000). The Huawei Nova 12 Pro comes in 256GB and 512GB storage options, priced at CNY 3,999 and CNY 4,399 (roughly Rs. 51,500), respectively. The Huawei Nova 12 Ultra is priced at CNY 4,699 (roughly Rs. 54,000) for the 512GB storage version and CNY 5,499 (roughly Rs. 64,000) for the 1TB storage variant.

The Huawei Nova 12 Lite is offered with 256GB and 512GB storage, costing CNY 2,499 (roughly Rs. 28,400) and CNY 2,799 (roughly Rs. 31,800), respectively.

The vanilla Huawei Nova 12 and Nova 12 Lite are offered in Colour No. 12, Cherry White and Obsidian Black shades, while the Huawei Nova 12 Pro comes in Colour No. 12, Obsidian Black, Cherry Blossom Pink, and Cherry Blossom White colour options. The Huawei Nova 12 Ultra comes in Colour No. 12, Smoky Gray, and Obsidian Black shades.

All four phones are up for pre-orders in China through VMall. The Huawei Nova 12 Ultra will go on sale starting January 12 next year, while the rest will be available beginning January 5.

Huawei Nova 12, Huawei Nova 12 Pro, and Huawei Nova 12 Ultra specifications

The Huawei Nova 12, Huawei Nova 12 Pro, and Huawei Nova 12 Ultra run on HarmonyOS 4 and feature a 6.7-inch full-HD+ (1,224 x 2,776 pixels) OLED LTPO display with up to 120Hz refresh rate, 2,160Hz high-frequency PWM dimming, and a 300Hz touch sampling rate. The display has a pill-shaped camera island that houses the selfie camera. Huawei hasn’t confirmed the processor of these three models on its official website, but they are believed to be powered by Kirin chipsets. The base models pack up to 512GB of storage, while the Nova 12 Ultra offers a maximum of 1TB of onboard storage.

For optics, the Huawei Nova 12, Huawei Nova 12 Pro, and Huawei Nova 12 Ultra have a dual rear camera setup, headlined by a 50-megapixel sensor. The primary camera of the Pro and Ultra models has a variable aperture ranging from f/1.4 to f/4.0. The camera unit also includes an 8-megapixel ultra-wide-angle sensor. Huawei has provided a dual front camera setup for selfies on the Nova 12 Pro and Nova 12 Ultra, comprising a 60-megapixel primary shooter and an 8-megapixel portrait sensor. The regular Huawei Nova 12 has a single 60-megapixel ultra-wide-angle front camera.

Connectivity options on the Huawei Nova 12, Huawei Nova 12 Pro, and Huawei Nova 12 Ultra include Wi-Fi 802.11 a/b/g/n/ac/ax, Bluetooth 5.2, GPS, AGPS, GLONASS, Beidou, NFC, and a USB Type-C port. There is a fingerprint sensor for biometric authentication, along with an ambient light sensor, colour temperature sensor, compass, gravity sensor, gyroscope, laser focus sensor and proximity light sensor.

All three models pack a 4,600mAh battery with support for up to 100W wired fast charging. The Huawei Nova 12 Pro and Huawei Nova 12 Ultra have a two-way Beidou satellite messaging system. This feature will let users send emergency texts using satellites when they are out of range or not connected to a mobile network.

Huawei Nova 12 Lite specifications

The dual-SIM (Nano) Huawei Nova 12 Lite runs on HarmonyOS 4 and features a 6.7-inch full-HD+ (1,084×2,412 pixels) OLED display with up to 120Hz refresh rate and a 300Hz touch sampling rate. The handset is powered by a Qualcomm Snapdragon 778G 4G SoC, coupled with an Adreno 642L GPU and up to 512GB of inbuilt storage.

For photography, the Huawei Nova 12 Lite has a dual rear camera unit, comprising a 50-megapixel primary sensor with f/1.9 aperture and an 8-megapixel ultra-wide-angle macro camera with f/2.2 aperture. For selfies and video chats, there is a 60-megapixel camera on the front. Connectivity options and sensors are similar to the other models.

Huawei has packed a 4,500mAh battery on the Nova 12 Lite with support for up to 66W wired fast charging. It measures 161.29×74.96×6.88mm and weighs 168 grams.



OnePlus 12, OnePlus 12R Price in India, Colour Options Tipped Ahead of January 23 Launch


OnePlus 12 and OnePlus 12R are confirmed to launch in India on January 23. The flagship model was launched in China earlier this month. The Indian variant of the OnePlus 12 is expected to be similar to its Chinese counterpart. The OnePlus 12R, on the other hand, which is tipped to succeed the OnePlus 11R, is said to be a rebranded OnePlus Ace 3, expected to launch in China in early January. Now, a tipster has suggested the colour options and price range of the two new OnePlus models in India.

Tipster Yogesh Brar (@heyitsyogesh) said in a post on X that the Indian variant of the OnePlus 12 is likely to launch in Black and Green colour options. He added that the phone will be available in configurations of up to 16GB of RAM and 512GB of onboard storage. According to the tipster, the handset is expected to be priced in the country between Rs. 58,000 and Rs. 60,000. 

In the same post, Brar noted that the OnePlus 12R, which is confirmed to launch alongside the OnePlus 12, is expected to be available in India in Blue and Black colourways. It is said to be offered in similar RAM and storage configurations as the flagship model. The tipster suggests that the phone is likely to be priced between Rs. 40,000 and Rs. 42,000. 

The OnePlus 12R has previously been tipped to be powered by a Snapdragon 8 Gen 2 SoC and ship with Android 14-based OxygenOS 14. It is likely to feature a 6.78-inch LTPO 4.0 ProXDR screen with a refresh rate of up to 120Hz along with Gorilla Glass Victus 2 protection. The triple rear camera unit of the phone is said to include a 50-megapixel IMX890 primary sensor, an 8-megapixel sensor with an ultra-wide lens, and a 2-megapixel macro shooter. It is also tipped to get a 16-megapixel front camera sensor. The phone may pack a 5,500mAh battery with 100W wired fast charging support.

In China, the OnePlus 12 launched with a Snapdragon 8 Gen 3 SoC and Android 14-based ColorOS 14. It sports a 6.82-inch quad-HD+ (1,440 x 3,168 pixels) LTPO OLED panel with a refresh rate of up to 120Hz and a peak brightness of 4,500 nits. The Hasselblad-tuned cameras of the handset come with a 50-megapixel Sony LYT-808 primary sensor with optical image stabilisation (OIS), a 64-megapixel telephoto camera with OIS and 3x optical zoom, and a 48-megapixel sensor with an ultra-wide-angle lens. The front camera has a 32-megapixel sensor. It is backed by a 5,400mAh battery with 100W SuperVOOC, 50W wireless, and 10W reverse wireless charging support.



Deepfake crackdown: Govt issues advisory to social media platforms to abide by IT rules


Amid growing concerns over deepfakes, the government has directed all platforms to comply with the IT rules, mandating companies to inform users in clear terms about prohibited content and cautioning that violations will attract legal consequences.

The IT Ministry will closely observe the compliance of intermediaries (social media and digital platforms) in the coming weeks and decide on further amendments to the IT Rules or the law if and when needed, an official release said.

The government has made it clear to platforms that if legal violations of the IT rules are noted or reported, then the consequences under law will follow.

The missive underlines a hardening of the government’s stance on the issue, amid growing concerns around misinformation powered by AI – deepfakes.


Earlier, several “deepfake” videos targeting leading actors went viral, sparking public outrage and raising concerns over the misuse of technology and tools for creating doctored content and fake narratives.

The advisory mandates that intermediaries — such as WhatsApp, Facebook, X, and others — communicate prohibited content specified under IT Rules clearly and precisely to users.

“The Ministry of Electronics and Information Technology (MEITY) has issued an advisory to all intermediaries, ensuring compliance with the existing IT rules,” the official release said.

The directive specifically targets the growing concerns around misinformation powered by AI – Deepfakes, the release said.

This advisory is the culmination of discussions spearheaded by Minister of State for IT Rajeev Chandrasekhar with intermediaries on the issue.

“The content not permitted under the IT Rules, in particular those listed under Rule 3(1)(b), must be clearly communicated to the users in clear and precise language including through its terms of service and user agreements and the same must be expressly informed to the user at the time of first-registration and also as regular reminders, in particular, at every instance of login and while uploading/sharing information onto the platform,” according to the advisory.

The advisory emphasises that digital intermediaries must ensure users are informed about penal provisions, including those in the IPC and the IT Act 2000.

In addition, the advisory said the terms of service and user agreements must clearly highlight that intermediaries/platforms are under obligation to report legal violations to law enforcement agencies under the relevant Indian laws applicable to the context.

“Rule 3(1)(b) within the due diligence section of the IT rules mandates intermediaries to communicate their rules, regulations, privacy policy, and user agreement in the user’s preferred language,” it said.

It is pertinent to mention here that Rule 3(1)(b)(v) explicitly prohibits the dissemination of misinformation and patently false information.

Digital platforms are obliged to ensure reasonable efforts to prevent users from “hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or content prohibited” on digital intermediaries.

The rule aims to ensure platforms identify and promptly remove misinformation, false or misleading content, and material impersonating others, including deepfakes.

Over the last month, during his meetings with industry leaders on the pressing issue of deepfakes, the minister has highlighted the urgency for all platforms and intermediaries to strictly adhere to current laws and regulations, emphasising that the IT rules comprehensively address the menace of deepfakes.

“Misinformation represents a deep threat to the safety and trust of users on the Internet,” Chandrasekhar said, adding that deepfakes, which are misinformation powered by AI, further amplify the threat to the safety and trust of users.

“On November 17, PM alerted the country to the dangers of deepfakes and post that, the ministry has had two Digital India Dialogues with all the stakeholders of the Indian Internet to alert them about the provisions of the IT Rules notified in October 2022, and amended in April 2023 that lays out 11 specific prohibited types of content on all social media intermediaries and platforms.”

Consequently, all intermediaries were asked to exercise due diligence in promptly removing such content from their platforms. He also emphasised that platforms have been duly informed about the legal consequences associated with any violations under the IT rules.

“Today, a formal advisory has been issued incorporating the ‘agreed to’ procedures to ensure that users on these platforms do not violate the prohibited content in Rule 3(1)(b) and if such legal violations are noted or reported then the consequences under law will follow,” the minister said.


How to unsubscribe bulk emails in Gmail


Unsubscribing from bulk emails in Gmail can be a tedious task, but with a few simple steps, you can reclaim your inbox from unwanted clutter. Here are two methods you can use:
Method 1: Unsubscribing from individual emails

  1. Open the email you want to unsubscribe from.
  2. Look for the “Unsubscribe” link. It’s usually located near the bottom of the email, in the footer or signature. In some cases, you might see a “Manage preferences” or “Update my email settings” link instead.
  3. Click the “Unsubscribe” link. You may be taken to a web page where you need to confirm your unsubscription.
  4. Confirm your unsubscription. Click the “Unsubscribe” button again, or follow the on-screen instructions.

Method 2: Using the Gmail unsubscribe feature

  1. Open your Gmail inbox.
  2. Search for emails from the sender you want to unsubscribe from. You can use the search bar at the top of your inbox.
  3. Hover your mouse over one of the emails from the sender. You’ll see a small menu appear next to the sender’s name.
  4. Click on the “Unsubscribe” button in the menu. You may be taken to a web page where you need to confirm your unsubscription.
  5. Confirm your unsubscription. Click the “Unsubscribe” button again, or follow the on-screen instructions.

Some tips to keep in mind:

  • If you can’t find the “Unsubscribe” link in an email, you can try forwarding the email to yourself and then using the Gmail unsubscribe feature.
  • You can also report the email as spam. This will help Gmail filter out similar emails in the future.
  • Be careful about clicking on links in unsubscribe emails. Some scammers may send emails that look like they are from legitimate companies in order to steal your personal information.
  • If you’re still having trouble unsubscribing from bulk emails, you can contact the sender directly and ask them to remove you from their mailing list.

By following these steps, you can easily unsubscribe from bulk emails in Gmail and keep your inbox clutter-free.
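
For readers comfortable with a bit of scripting, here is a minimal, hypothetical sketch (not one of Gmail's built-in features described above) that uses the official Gmail API to list senders whose recent messages carry the standard List-Unsubscribe header, so you can see which mailing lists you are on. It assumes you have already obtained OAuth credentials (creds) with read-only Gmail access; the function name and limits are illustrative.

# Sketch: list likely bulk senders via the Gmail API's List-Unsubscribe header.
from googleapiclient.discovery import build

def list_bulk_senders(creds, max_messages=50):
    # Build a Gmail API client from existing OAuth credentials (assumed).
    service = build("gmail", "v1", credentials=creds)

    # Search recent messages that mention "unsubscribe".
    resp = service.users().messages().list(
        userId="me", q="unsubscribe", maxResults=max_messages
    ).execute()

    senders = set()
    for ref in resp.get("messages", []):
        # Fetch only the headers we care about for each matching message.
        msg = service.users().messages().get(
            userId="me",
            id=ref["id"],
            format="metadata",
            metadataHeaders=["From", "List-Unsubscribe"],
        ).execute()
        headers = {h["name"]: h["value"] for h in msg["payload"]["headers"]}
        # The List-Unsubscribe header is what marks a message as bulk mail.
        if "List-Unsubscribe" in headers:
            senders.add(headers.get("From", "unknown sender"))
    return sorted(senders)

The actual unsubscribing is still done through the links or buttons described in the methods above; the script only helps you find the senders worth acting on.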




Apple appeals Apple Watch import ban: Here’s what the company has to say


Hours after it was reported that US President Joe Biden’s administration declined to veto the decision of the US International Trade Commission (ITC) to ban imports of some Apple Watch models, Apple has appealed the decision in the US Court of Appeals.
The government tribunal had earlier imposed a ban on the import of Apple Watch models, including Apple Watch Series 9 and Apple Watch Ultra 2, based on a complaint from medical monitoring technology company Masimo.
Apple has reportedly also filed an emergency request to pause the ban, asking the Federal Circuit to halt it at least until US Customs and Border Protection decides whether redesigned versions of its watches infringe Masimo’s patents, and while the court considers Apple’s request. The customs office is due to make its decision on January 12, news agency Reuters cited Apple as saying.
What Apple has to say
“We strongly disagree with the USITC decision and resulting exclusion order, and are taking all measures to return Apple Watch Series 9 and Apple Watch Ultra 2 to customers in the US as soon as possible,” Apple was quoted as saying.
It must be noted that Apple has been the leading company for several quarters when it comes to shipments of smartwatches. The company’s wearables, home and accessory business, which includes the Apple Watch, brought in $8.28 billion in revenue during the third quarter of 2023.
The development comes a week after the US ITC rejected Apple’s request to pause the ban during the appeal process. It also opposed Apple’s request to halt the ban in a court filing.
Meanwhile, a Masimo spokesperson called the ITC decision “a win for the integrity of the U.S. patent system, and ultimately American consumers.”
Masimo accused Apple of stealing its pulse oximetry technology and incorporating it into some Apple Watch models.




Copilot: Users will not have to use Bing app to access Microsoft Copilot on Android, here’s why


Microsoft has launched a dedicated Copilot app for Android, and as per the listing on the Google Play Store, it was updated on December 19. The app enables users to use Microsoft’s AI-powered Copilot, which means users won’t have to download the Bing mobile app to access Bing Chat (now Copilot). It is not available for iPhones yet.
“Improve Your Productivity with Copilot–Your AI-Powered Chat Assistant,” the description of the app reads.
“Copilot is a pioneering chat assistant from Microsoft powered by the latest OpenAI models, GPT-4 and DALL·E 3. These advanced AI technologies provide fast, complex, and precise responses, as well as the ability to create breathtaking visuals from simple text descriptions. Chat and create all in one place—for free!” it adds.
What Microsoft Copilot app can do
The Microsoft Copilot app for Android devices is pretty much similar to ChatGPT. It gives users access to chatbot capabilities, including image generation through DALL-E 3 as well as drafting text for emails and documents – just like the way it is done on a PC.
Interestingly, the app allows users to access OpenAI’s latest GPT-4 model for free, which users otherwise have to pay for when using ChatGPT.
Bing Chat to Copilot
Last month, Microsoft announced that it is renaming Bing Chat and Bing Chat Enterprise to Copilot to streamline the experience across all its Copilot products. It said that users who sign up via Microsoft Entra ID (a cloud-based identity and access management solution) will be able to get access to all Copilot services.
“Over time, our vision is to expand Copilot to any Entra ID user at no additional cost—so wherever and whenever an employee signs into Copilot with their work account they will get commercial data protection,” the company said. The ‘new’ Copilot became generally available on December 1.




How Do Gen AI systems Learn? Researchers Have a Magic Tool to Understand AI – Harry Potter


More than two decades after J.K. Rowling introduced the world to a universe of magical creatures, forbidden forests and a teenage wizard, Harry Potter is finding renewed relevance in a very different body of literature: AI research. A growing number of researchers are using the best-selling Harry Potter books to experiment with generative artificial intelligence technology, citing the series’ enduring influence in popular culture and the wide range of language data and complex wordplay within its pages. Reviewing a list of studies and academic papers referencing Harry Potter offers a snapshot into cutting-edge AI research — and some of the thorniest questions facing the technology.

In perhaps the most notable recent example, Harry, Hermione and Ron star in a paper titled “Who’s Harry Potter?” that sheds light on a new technique helping large language models to selectively forget information. It’s a high-stakes task for the industry: Large language models, which power AI chatbots, are built on vast amounts of online data, including copyrighted material and other problematic content. That has led to lawsuits and public scrutiny for some AI companies.


The paper’s authors, Microsoft researchers Mark Russinovich and Ronen Eldan, said they’ve demonstrated that AI models can be altered or edited to remove any knowledge of the existence of the Harry Potter books, including characters and plots, without sacrificing the AI system’s overall decision-making and analytical abilities.  The duo said they chose the books because of their universal familiarity. “We believed that it would be easier for people in the research community to evaluate the model resulting from our technique and confirm for themselves that the content has indeed been ‘unlearned,’” said Russinovich, chief technology officer of Microsoft Azure. “Almost anyone can come up with prompts for the model that would probe whether or not it ‘knows’ the books. Even people who haven’t read the books would be aware of plot elements and characters.”

In another study, researchers from the University of Washington in Seattle, University of California at Berkeley and the Allen Institute for AI developed a new language model called Silo that can remove data to reduce legal risks. However, the model’s performance significantly dropped if trained only on low-risk text such as out-of-copyright books or government documents, they said in a paper published earlier this year.

To go deeper, the researchers used the Harry Potter books to see if individual pieces of text influence an AI system’s performance. They created two datastores, or collections of websites and documents. The first included all published books except the first Harry Potter book; another included all books in the series but the second, and so on. “When the Harry Potter books are removed from the datastore, the perplexity gets worse,” the researchers said, referring to a standard measure of how well a language model predicts text (lower is better).
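
To make the term concrete, here is a minimal sketch (not the researchers' own code) of how perplexity is typically computed for a language model using the Hugging Face transformers library; the model name and sample passage are placeholders.

# Perplexity = exp(average negative log-likelihood the model assigns to a text).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(text, model_name="gpt2"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Lower perplexity means the model predicts the passage better; removing related
# books from the training data should push this number up on matching text.
print(perplexity("The young wizard picked up his wand and stepped into the forest."))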

AI studies have cited Harry Potter for at least a decade, but it’s become more common as academics and technologists have focused on AI tools that can process and respond to natural language with relevant answers. With Harry Potter, “the abundance of scenes, dialogs, emotional moments make it very relevant to the specific area of natural language processing,” said Leila Wehbe, a Carnegie Mellon researcher who ran a set of experiments in 2014 collecting brain MRI data from people reading Harry Potter stories to better understand language mechanisms. On arXiv, an open-access repository of scientific research, recent papers include, “Machine learning for potion development at Hogwarts,” “Large Language Models Meet Harry Potter” and “Detecting Spells in Fantasy Literature with a Transformer Based Artificial Intelligence.”

Even when it’s not central to the research, Harry Potter is also a favorite literary reference for researchers. One study, for example, used Rowling’s works to test the intelligence of AI systems such as those that spawned the chatbot ChatGPT, a topic that has generated much heat in recent debates. Terrence Sejnowski, who directs the computational neurobiology laboratory at the Salk Institute for Biological Studies, argued in the paper that chatbots merely reflect the intelligence and biases of their users, like the Mirror of Erised in the first Harry Potter book, which reflects a person’s desires back to them. “Harry Potter is popular with younger researchers,” said Wehbe. “They would have read them as children or adolescents, thus thinking of them when choosing a written or spoken text corpus.”


Chinese foldable phone maker Royole hit by employee protests over unpaid wages


Before Samsung could launch its first-ever Galaxy Fold, a small Chinese company called Royole unveiled the Royole FlexPai, the world’s first foldable smartphone to be made commercially available. However, the company, whose primary business is focused on flexible display technology, now seems to be going through a financial crisis. The foldable brand has reportedly defaulted on its employee wages for nearly a year.
According to a report by China’s News Daily (seen by GizmoChina), around 50 Royole employees resorted to protest demonstrations with banners and slogans at the company’s Display Headquarters entrance in Shenzhen.
After Royole’s CEO Liu Zihong was unable to keep the verbal promises he had made, the employees had no option but to protest against the company.
The employees have not received their arrears and wages since November 2022. They are now demanding that the issue be resolved and are asking the company to credit their wages as soon as possible.
The report also notes that, as part of their demands, the Employee Rights Protection Committee has signed an application for the settlement of the remaining claims by Shenzhen Royole Technology Co., Ltd. Over the past couple of years, thousands of employees have already left the company due to the ongoing financial crisis.
What the company has to say
In reply, the company said that its business has suffered significant losses and that it is only able to pay social security deposits. The company is now left with only 200–300 employees, including engineers who maintain the production line.
Royole’s moves to seek help from the government and investors also haven’t been successful. Meanwhile, employees have claimed that the production line never started in 2022. The government is helping the company restructure its debt; however, no significant change has been noticed so far.
In 2022, the company launched its last smartphone, the Royole FlexPai 2, which featured up to 12GB of RAM and 512GB of storage. Over the last decade, the company has been considered a primary competitor to Samsung Display’s monopoly, as it was successful in developing ultra-low-temperature non-silicon process integration technology to create flexible displays.




As social media guardrails fade and AI deepfakes go mainstream, experts warn of impact on elections


Nearly three years after rioters stormed the U.S. Capitol, the false election conspiracy theories that drove the violent attack remain prevalent on social media and cable news: suitcases filled with ballots, late-night ballot dumps, dead people voting.

Experts warn it will likely be worse in the coming presidential election contest. The safeguards that attempted to counter the bogus claims the last time are eroding, while the tools and systems that create and spread them are only getting stronger.

Many Americans, egged on by former President Donald Trump, have continued to push the unsupported idea that elections throughout the U.S. can’t be trusted. A majority of Republicans (57%) believe Democrat Joe Biden was not legitimately elected president.

Meanwhile, generative artificial intelligence tools have made it far cheaper and easier to spread the kind of misinformation that can mislead voters and potentially influence elections. And social media companies that once invested heavily in correcting the record have shifted their priorities.

“I expect a tsunami of misinformation,” said Oren Etzioni, an artificial intelligence expert and professor emeritus at the University of Washington. “I can’t prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified.”

AI DEEPFAKES GO MAINSTREAM

Manipulated images and videos surrounding elections are nothing new, but 2024 will be the first U.S. presidential election in which sophisticated AI tools that can produce convincing fakes in seconds are just a few clicks away.

The fabricated images, videos and audio clips known as deepfakes have started making their way into experimental presidential campaign ads. More sinister versions could easily spread without labels on social media and fool people days before an election, Etzioni said.

“You could see a political candidate like President Biden being rushed to a hospital,” he said. “You could see a candidate saying things that he or she never actually said. You could see a run on the banks. You could see bombings and violence that never occurred.”

High-tech fakes already have affected elections around the globe, said Larry Norden, senior director of the elections and government program at the Brennan Center for Justice. Just days before Slovakia’s recent elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were shared as real across social media regardless.

These tools might also be used to target specific communities and hone misleading messages about voting. That could look like persuasive text messages, false announcements about voting processes shared in different languages on WhatsApp, or bogus websites mocked up to look like official government ones in your area, experts said.

Faced with content that is made to look and sound real, “everything that we’ve been wired to do through evolution is going to come into play to have us believe in the fabrication rather than the actual reality,” said misinformation scholar Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania.

Republicans and Democrats in Congress and the Federal Election Commission are exploring steps to regulate the technology, but they haven’t finalized any rules or legislation. That’s left states to enact the only restrictions so far on political AI deepfakes.

A handful of states have passed laws requiring deepfakes to be labeled or banning those that misrepresent candidates. Some social media companies, including YouTube and Meta, which owns Facebook and Instagram, have introduced AI labeling policies. It remains to be seen whether they will be able to consistently catch violators.

SOCIAL MEDIA GUARDRAILS FADE

It was just over a year ago that Elon Musk bought Twitter and began firing its executives, dismantling some of its core features and reshaping the social media platform into what’s now known as X.

Since then, he has upended its verification system, leaving public officials vulnerable to impersonators. He has gutted the teams that once fought misinformation on the platform, leaving the community of users to moderate itself. And he has restored the accounts of conspiracy theorists and extremists who were previously banned.

The changes have been applauded by many conservatives who say Twitter’s previous moderation attempts amounted to censorship of their views. But pro-democracy advocates argue the takeover has shifted what once was a flawed but useful resource for news and election information into a largely unregulated echo chamber that amplifies hate speech and misinformation.

Twitter used to be one of the “most responsible” platforms, showing a willingness to test features that might reduce misinformation even at the expense of engagement, said Jesse Lehrich, co-founder of Accountable Tech, a nonprofit watchdog group.

“Obviously now they’re on the exact other end of the spectrum,” he said, adding that he believes the company’s changes have given other platforms cover to relax their own policies. X didn’t answer emailed questions from The Associated Press, only sending an automated response.

In the run-up to 2024, X, Meta and YouTube have together removed 17 policies that protected against hate and misinformation, according to a report from Free Press, a nonprofit that advocates for civil rights in tech and media.

In June, YouTube announced that while it would still regulate content that misleads about current or upcoming elections, it would stop removing content that falsely claims the 2020 election or other previous U.S. elections were marred by “widespread fraud, errors or glitches.” The platform said the policy was an attempt to protect the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions.”

Lehrich said even if tech companies want to steer clear of removing misleading content, “there are plenty of content-neutral ways” platforms can reduce the spread of disinformation, from labeling months-old articles to making it more difficult to share content without reviewing it first.

X, Meta and YouTube also have laid off thousands of employees and contractors since 2020, some of whom have included content moderators.

The shrinking of such teams, which many blame on political pressure, “sets the stage for things to be worse in 2024 than in 2020,” said Kate Starbird, a misinformation expert at the University of Washington.

Meta explains on its website that it has some 40,000 people devoted to safety and security and that it maintains “the largest independent fact-checking network of any platform.” It also frequently takes down networks of fake social media accounts that aim to sow discord and distrust.

“No tech company does more or invests more to protect elections online than Meta – not just during election periods but at all times,” the posting says.

Ivy Choi, a YouTube spokesperson, said the platform is “heavily invested” in connecting people to high-quality content on YouTube, including for elections. She pointed to the platform’s recommendation and information panels, which provide users with reliable election news, and said the platform removes content that misleads voters on how to vote or encourages interference in the democratic process.

The rise of TikTok and other, less regulated platforms such as Telegram, Truth Social and Gab, also has created more information silos online where baseless claims can spread. Some apps that are particularly popular among communities of color and immigrants, such as WhatsApp and WeChat, rely on private chats, making it hard for outside groups to see the misinformation that may spread.

“I’m worried that in 2024, we’re going to see similar recycled, ingrained false narratives but more sophisticated tactics,” said Roberta Braga, founder and executive director of the Digital Democracy Institute of the Americas. “But on the positive side, I am hopeful there is more social resilience to those things.”

THE TRUMP FACTOR

Trump’s front-runner status in the Republican presidential primary is top of mind for misinformation researchers who worry that it will exacerbate election misinformation and potentially lead to election vigilantism or violence.

The former president still falsely claims to have won the 2020 election.

“Donald Trump has clearly embraced and fanned the flames of false claims about election fraud in the past,” Starbird said. “We can expect that he may continue to use that to motivate his base.”

Without evidence, Trump has already primed his supporters to expect fraud in the 2024 election, urging them to intervene to “guard the vote” to prevent vote rigging in diverse Democratic cities. Trump has a long history of suggesting elections are rigged if he doesn’t win and did so before voting in 2016 and 2020.

That continued wearing away of voter trust in democracy can lead to violence, said Bret Schafer, a senior fellow at the nonpartisan Alliance for Securing Democracy, which tracks misinformation.

“If people don’t ultimately trust information related to an election, democracy just stops working,” he said. “If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act.”

ELECTION OFFICIALS RESPOND

Election officials have spent the years since 2020 preparing for the expected resurgence of election denial narratives. They’ve dispatched teams to explain voting processes, hired outside groups to monitor misinformation as it emerges and beefed up physical protections at vote-counting centers.

In Colorado, Secretary of State Jena Griswold said informative paid social media and TV campaigns that humanize election workers have helped inoculate voters against misinformation.

“This is an uphill battle, but we have to be proactive,” she said. “Misinformation is one of the biggest threats to American democracy we see today.”

Minnesota Secretary of State Steve Simon’s office is spearheading #TrustedInfo2024, a new online public education effort by the National Association of Secretaries of State to promote election officials as a trusted source of election information in 2024.

His office also is planning meetings with county and city election officials and will update a “Fact and Fiction” information page on its website as false claims emerge. A new law in Minnesota will protect election workers from threats and harassment, bar people from knowingly distributing misinformation ahead of elections and criminalize people who non-consensually share deepfake images to hurt a political candidate or influence an election.

“We hope for the best but plan for the worst through these layers of protections,” Simon said.

In a rural Wisconsin county north of Green Bay, Oconto County Clerk Kim Pytleski has traveled the region giving talks and presentations to small groups about voting and elections to boost voters’ trust. The county also offers equipment tests in public so residents can observe the process.

“Being able to talk directly with your elections officials makes all the difference,” she said. “Being able to see that there are real people behind these processes who are committed to their jobs and want to do good work helps people understand we are here to serve them.”

