Will there be a Samsung Galaxy Z Fold 6 Ultra? Or what about a Samsung Galaxy Z Fold 6 FE? These are perhaps the biggest questions surrounding Samsung’s upcoming foldable phones, as we’ve seen leaks about both a super-premium Ultra model and a more affordable FE version, and there’s still uncertainty over whether one, both, or neither will actually launch.
The latest leak again points towards a Samsung Galaxy Z Fold 6 Ultra being in the works, as GalaxyClub (via NotebookCheck) claims that Samsung is working on a device with the model number SM-F958.
Now, the Samsung Galaxy Z Fold 5 has the model number SM-F946, and the last digit on Samsung’s Fold model numbers is always a 6 – so the upcoming Samsung Galaxy Z Fold 6 will most likely be the SM-F956.
So what about SM-F958? Well, the number ‘8’ is used at the end of model numbers for Samsung’s Ultra phones, with the Samsung Galaxy S24 Ultra, for example, having the model number SM-S928.
So, with SM-F958 looking like a Z Fold model number, but with an 8 at the end, the logical assumption is that this is the rumored Samsung Galaxy Z Fold 6 Ultra.
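The naming conventions described above can be sketched as a tiny decoder. To be clear, these suffix rules are assumptions inferred from the leaks (generation digit, tier digit, regional letter), not an official Samsung scheme, and the function name is hypothetical:

```python
def decode_samsung_model(model: str) -> dict:
    """Decode a Samsung foldable model number using the leaked conventions.

    Assumed pattern SM-F9XY[N]: Y is the tier digit (6 = standard Fold,
    8 = Ultra), and a trailing 'N' marks a South Korean variant.
    """
    code = model.removeprefix("SM-")
    region = "South Korea" if code.endswith("N") else "Global"
    digits = code.rstrip("N")
    tier = "Ultra" if digits.endswith("8") else "Standard"
    return {"region": region, "tier": tier}
```

Under those assumptions, `decode_samsung_model("SM-F958N")` yields a South Korean Ultra, which is exactly the reading GalaxyClub's leak suggests.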
Coming to a store not so near you
The bad news, however, is that you might not be able to buy this phone – and we don’t just mean because it’s sure to be out of most people’s reach price-wise.
You see, the complete model number spotted by GalaxyClub is actually SM-F958N, with that ‘N’ at the end denoting that it’s intended for South Korea. The site goes so far as to say that this South Korean model is the only version that “appears to be under development.”
Now, it’s entirely possible that GalaxyClub’s information is wrong or incomplete, so even if it’s right about the model numbers there could be variants planned for the rest of the world too. But it’s not impossible that Samsung would limit this phone to its home country, especially while testing the waters with the first generation of a foldable Ultra device.
We should also add that none of this precludes the possibility of a Samsung Galaxy Z Fold 6 FE launching as well. It’s entirely possible that Samsung is working on both, and perhaps the more affordable FE will also be more widely available.
Leaks and past form suggest that the standard Samsung Galaxy Z Fold 6 will be announced in late July, alongside the Samsung Galaxy Z Flip 6, so we might also see the Galaxy Z Fold 6 Ultra and/or FE then too – although we’ve heard elsewhere that if there is a Galaxy Z Fold 6 FE, it might land later, in September or October. Whatever the case, we’ll be sure to update you on all the news and leaks in the meantime.
Conventional cybersecurity solutions, often limited in scope, fail to provide a holistic strategy. In contrast, AI tools offer a comprehensive, proactive, and adaptive approach to cybersecurity, distinguishing between benign user errors and genuine threats. They enhance threat management through automation, from detection to incident response, and employ persistent threat hunting to stay ahead of advanced threats. AI systems continuously learn and adapt, analyzing network baselines and integrating threat intelligence to detect anomalies and evolving threats, ensuring superior protection.
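The baseline-and-anomaly idea mentioned above can be illustrated with a minimal sketch: a rolling statistical baseline over network event counts, flagging sharp deviations. The window size, threshold, and data here are illustrative assumptions, not the logic of any specific security product:

```python
from statistics import mean, stdev

def detect_anomalies(counts, window=5, z_threshold=3.0):
    """Flag points that deviate sharply from a rolling baseline.

    counts: per-interval event counts (e.g. login attempts per minute).
    Returns the indices where a value exceeds the baseline mean by more
    than z_threshold standard deviations of the preceding window.
    """
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma > z_threshold:
            anomalies.append(i)
    return anomalies
```

Real AI-driven tools go far beyond this fixed z-score rule (learned models, threat-intelligence feeds, behavioral context), but the core pattern of comparing live activity to a learned baseline is the same.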
However, the rise of AI also introduces potential security risks, such as rogue AI posing targeted threats without sufficient safeguards. Instances like Bing‘s controversial responses last year and ChatGPT‘s misuse by hacking groups highlight the dual-edged nature of AI. Despite new safeguards in AI systems to prevent misuse, their complexity makes monitoring and control challenging, raising concerns about AI’s potential to become an unmanageable cybersecurity threat. This complexity underscores the ongoing challenge of ensuring AI’s safe and ethical use, bringing sci-fi narratives ever closer to our reality.
Significant risks
In essence, artificial intelligence systems could potentially be manipulated or designed with harmful intentions, posing significant risks to individuals, organizations, and even entire nations. The manifestation of rogue AI could take numerous forms, each with its unique purpose and creation method, including:
AI systems altered to conduct nefarious activities such as hacking, spreading false information, or spying.
AI systems that become uncontrollable due to insufficient supervision or management, leading to unexpected and possibly dangerous outcomes.
AI developed explicitly for malevolent aims, like automated weaponry or cyber warfare.
One alarming aspect is AI’s extensive potential for integration into various sectors of our lives, including economic, social, cultural, political, and technological spheres. This presents a paradox, as the very capabilities that make AI invaluable across these domains also empower it to cause unprecedented harm through its speed, scalability, adaptability, and capacity for deception.
Jacob Birmingham
VP of Product Development, Camelot Secure.
Hazards of rogue AI
The hazards associated with rogue AI include:
Disinformation: As recently as February 15, 2024, OpenAI unveiled its “Sora” technology, demonstrating its ability to produce lifelike video clips. This advancement could be exploited by rogue AI to generate convincing yet false narratives, stirring up undue alarm and misinformation in society.
Speed: AI’s ability to process data and make decisions rapidly surpasses human capabilities, complicating efforts to counteract or defend against rogue AI threats in a timely manner.
Scalability: Rogue AI has the potential to duplicate itself, automate assaults, and breach numerous systems at once, causing extensive damage.
Adaptability: Sophisticated AI can evolve and adjust to new settings, rendering it unpredictable and hard to combat.
Deception: Rogue AI might impersonate humans or legitimate AI operations, complicating the identification and neutralization of such threats.
Consider the apprehension surrounding the early days of the internet, particularly within banks, stock markets, and other sensitive areas. Just as connecting to the internet exposes these sectors to cyber threats, AI introduces novel vulnerabilities and attack vectors due to its deep integration into various facets of our existence.
A particularly worrisome example of rogue AI application is the replication of human voices. AI’s capabilities extend beyond text and code, enabling it to mimic human speech accurately. The potential for harm is starkly illustrated by scenarios where AI mimics a loved one’s voice to perpetrate scams, such as convincing a grandmother to send money under false pretenses.
A proactive stance
To counter rogue AI, a proactive stance is essential. OpenAI, for example, announced Sora yet took a disciplined approach, keeping it under strict control and not yet making it publicly available. As posted on its X account on February 15, 2024: “We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers – domain experts in areas like misinformation, hateful content, and bias – who are adversarially testing the model.”
AI developers must take these four critical proactive steps:
Implement stringent security protocols to shield AI systems from unauthorized interference.
Set ethical guidelines and responsible development standards to reduce unintended repercussions.
Collaborate across the AI community to exchange insights and establish uniform safety and ethical norms.
Continuously monitor AI systems to preemptively identify and mitigate risks.
Organizations must also prepare for rogue AI threats by:
Utilizing resources in AI security and risk management to train their personnel to recognize AI-related threats.
Forging strong partnerships with industry, government regulators, and policymakers to stay up to date with both AI advancements and best practices.
Implementing annual risk assessments such as CMMC and external network penetration testing, and performing regular risk evaluations that specifically address vulnerabilities in AI systems, covering both internal and external AI systems integrated into the company’s business operations and information systems.
Providing a clear and readily available AI usage policy within the organization is key to helping educate and ensure ethical and safety standards are met.
It’s 2024, and it should go without saying that the potential dangers of rogue AI systems are real and shouldn’t be ignored. However, as an AI advocate, I believe the pros still outweigh the cons, and we all need to start adopting and understanding AI’s potential sooner rather than later. By promoting a culture of ethical AI development and use, and emphasizing security and ethical considerations, we can minimize the risks associated with rogue AI and leverage its ability to serve the greater good of humanity.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Love watching YouTube’s wealth of free tutorials, unboxing videos, and old intros to cartoons you loved in the 90s on the big screen – i.e. your TV? You’re not alone, although watching YouTube videos on your TV does come with its own obstacles.
The most notable of these is that your TV almost certainly doesn’t have a touchscreen (and YouTube’s TV app wasn’t set up to take advantage of one), so you have to rely on your TV remote to find the part of a clip you really wanted to see, where navigating using a finger on your device is often a lot quicker.
YouTube gets it. The video-sharing and social media platform (owned by Google) has been making several changes to its TV app, the latest of which is intended to make it simpler for viewers to cut through lengthy intros to get to the best parts of the video they’re watching.
Not to be confused with the AI-powered recommendation system being tested for YouTube Premium, the YouTube app for TVs will now auto-generate key moments in videos, which viewers can then access without having to guesstimate on that progress bar at the bottom.
YouTube CEO Neal Mohan announced the update in a post on X (formerly Twitter).
“We know viewers love to watch YouTube in the living room, and we’re continuing to innovate to make the experience on TV even better. Now you can easily access auto generated key moments from any video. Check it out the next time you watch YouTube on your TV…” – April 2, 2024
As noted by Android Authority, when watching videos on YouTube on your TV, pulling up the video progress bar should now reveal some white markers across it – I tried this with a few videos and couldn’t see them, but it could still be rolling out in the UK where I’m based.
Said white markers are the new auto-generated key moments in your video! You should also be able to quickly cycle through them using your remote. YouTube on TV will reportedly also give you a thumbnail of the key moment, along with a caption, so you’ll be clued up on whether it’s the segment you’re after.
It’s worth noting that content creators have been able to manually create ‘chapters’ to help viewers cut to crucial parts of their videos since early 2020, but this feature helps bridge the gap if an uploader didn’t do that – or for older videos and clips that were uploaded before that particular content curation perk arrived.
The Open Worldwide Application Security Project (OWASP) suffered a data breach in late February 2024 resulting in the exposure of sensitive data belonging to some of its members.
In an announcement published on the OWASP website, Executive Director Andrew van der Stock confirmed the breach and explained that it happened due to a misconfiguration of an old OWASP Wiki web server.
As a result, an unnamed threat actor gained access to resumes belonging to open source fans who joined between 2006 and 2014.
Notifying affected members
“OWASP collected resumes as part of the early membership process, whereby members were required in the 2006 to 2014 era to show a connection to the OWASP community,” van der Stock explained. “OWASP no longer collects resumes as part of the membership process.”
Through these resumes, van der Stock further said, the threat actors obtained people’s names, email addresses, postal addresses, phone numbers, and “other personally identifiable information”. Enough to engage in phishing or identity theft.
Given that the data was collected between 2006 and 2014, there’s a good chance it’s outdated. In that case, the OWASP chief says, members need not act. Those who believe the information is still current, should be careful when receiving SMS messages, calls, and emails. The project will try to notify affected individuals, it was said, but given the age of the data on file, it could be a challenge.
“As many of the individuals affected by this breach are no longer with OWASP and the age of the data is between ten and 18 years old, a great deal of the personal details included in this breach are significantly out of date, making contact difficult,” it was said. “Regardless, we will contact the email addresses discovered during our investigations.”
OWASP is a software security non-profit, with thousands of members and frequent training conferences around the world.
If you’ve been holding onto an older iPad then this might be the year to upgrade, as not only are we expecting to see the iPad Pro 2024, the iPad Air 6, the iPad 11, and the iPad mini 7, but Apple apparently won’t bring iPadOS 18 to three existing models that currently have iPadOS 17.
This is according to a post “on social media by a private account with a strong track record” seen by 9to5Mac, and they claim specifically that the iPad (6th generation), the iPad Pro 12.9 (2nd generation), and the iPad Pro 10.5 won’t get iPadOS 18.
The other iPads currently running iPadOS 17 should though, meaning that if you have an iPad, iPad Air or iPad mini from 2019 or later, or an iPad Pro from 2018 or later, you should get at least one more major software update, according to this leak.
We would however take this with a pinch of salt, especially since, as 9to5Mac notes, the two Pro models that apparently aren’t getting iPadOS 18 have an A10X Fusion chipset, while the oldest standard iPad that’s said to be in line for the update has a less powerful A10 Fusion chipset, which seems odd.
Updates for every iPhone
This leak also included information about the phones that will apparently get iOS 18, and it’s better news there, as apparently all the iPhones that can get iOS 17 will also get iOS 18. That means the iPhone XR and iPhone XS onwards, as well as the iPhone SE (2020) onwards.
We’ve heard the same claims about iOS 18 compatibility before, and hearing them again from another source adds credibility to it. That older leak also mentioned iPadOS 18 compatibility, and similarly said that iPads with the A10X Fusion chipset wouldn’t get it.
We should learn a lot more about iOS 18 and iPadOS 18 soon, as Apple will almost certainly announce both at WWDC 2024, which starts on June 10. We’d expect betas to land soon after that, but the finished software probably won’t launch until September, so we’ve got a while to wait yet for what could be the biggest software update in iOS history, packed full of new AI features for your iPhone.
The United States and Britain on Monday announced a new partnership on the science of artificial intelligence safety, amid growing concerns about upcoming next-generation versions.
Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan signed a memorandum of understanding in Washington to jointly develop advanced AI model testing, following commitments announced at an AI Safety Summit in Bletchley Park in November.
“We all know AI is the defining technology of our generation,” Raimondo said. “This partnership will accelerate both of our institutes’ work across the full spectrum to address the risks of our national security concerns and the concerns of our broader society.”
Britain and the United States are among countries establishing government-led AI safety institutes.
Britain said in October its institute would examine and test new types of AI, while the United States said in November it was launching its own safety institute to evaluate risks from so-called frontier AI models and is now working with 200 companies and entities.
Under the formal partnership, Britain and the United States plan to perform at least one joint testing exercise on a publicly accessible model and are considering exploring personnel exchanges between the institutes. Both are working to develop similar partnerships with other countries to promote AI safety.
“This is the first agreement of its kind anywhere in the world,” Donelan said. “AI is already an extraordinary force for good in our society, and has vast potential to tackle some of the world’s biggest challenges, but only if we are able to grip those risks.”
Generative AI – which can create text, photos and videos in response to open-ended prompts – has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans with catastrophic effects.
In a joint interview with Reuters on Monday, Raimondo and Donelan said urgent joint action was needed to address AI risks.
“Time is of the essence because the next set of models are about to be released, which will be much, much more capable,” Donelan said. “We have a focus on the areas that we are dividing and conquering and really specializing.”
Raimondo said she would raise AI issues at a meeting of the US-EU Trade and Technology Council in Belgium Thursday.
The Biden administration plans to soon announce additions to its AI team, Raimondo said. “We are pulling in the full resources of the US government.”
Both countries plan to share key information on capabilities and risks associated with AI models and systems and technical research on AI safety and security.
In October, Biden signed an executive order that aims to reduce the risks of AI. In January, the Commerce Department said it was proposing to require US cloud companies to determine whether foreign entities are accessing US data centers to train AI models.
Britain said in February it would spend more than GBP 100 million ($125.5 million or roughly Rs. 1,047 crore) to launch nine new AI research hubs and train regulators about the technology.
Raimondo said she was especially concerned about the threat of AI applied to bioterrorism or a nuclear war simulation.
“Those are the things where the consequences could be catastrophic and so we really have to have zero tolerance for some of these models being used for that capability,” she said.
In an effort to provide more educational content on its platform, TikTok has announced the expansion of its STEM feed to Europe.
Already rolled out across the US, TikTok’s STEM feed is set to roll out across Europe, beginning with the UK and Ireland.
The news comes as the social media platform continues to face scrutiny in the US and the UK over the content it shows, as well as its alleged affiliation with the Chinese government.
TikTok STEM feed
The STEM feed, designed to provide users with science, technology, engineering, and mathematics knowledge from experts, will be available alongside the existing ‘For You’ and ‘Following’ tabs. For users under the age of 18, the feed will be enabled by default, with older users granted permission to disable it.
TikTok has partnered with Common Sense Networks and Poynter to review material featured on the STEM feed in order to ensure accuracy.
The company said (via SocialMediaToday): “Starting in the UK and Ireland today, and across Europe in the coming weeks, users will be able to click on the STEM feed, alongside For You feed, to open up a world of knowledge from respected experts in their fields. The feed will include English speaking content with auto-translate subtitles, which will be fact checked by two independent organizations.”
Partners will include physicist @particleclara from The Large Hadron Collider at CERN and @NewScientist magazine.
Since its launch in the US, the STEM feed has gained notable traction, with one-third of teenage users engaging with it on a weekly basis. More than 15 million STEM-related videos have been published on the app in the past three years (via TechCrunch).
Despite the platform’s efforts to provide more accurate information, the launch comes amid scrutiny over TikTok’s content moderation practices, with an ongoing EU investigation into its Digital Services Act compliance relating to its handling of inappropriate content.
China is accelerating efforts to establish a massive blockchain network, despite its strict anti-crypto stance. The goal is to allow the government of China to engage in blockchain-related activities, especially in a cross-border setting. The Chinese government has launched the ‘Ultra-Large Scale Blockchain Infrastructure Platform for the Belt and Road Initiative’. Announced in 2013, China’s ambitious Belt and Road Initiative (BRI) is a development strategy for a global infrastructure through which it aims to connect continents across land and sea.
The project for the upcoming Chinese public blockchain platform is being spearheaded by Conflux Network, with the launch announced on Sunday. A multichain blockchain system, the network is operated by the Conflux Foundation, which is also known as the Shanghai Tree-Graph Blockchain Research Institute.
The Conflux Network posted updates about the project on X (formerly Twitter), revealing that the platform would “provide the base for developing applications that showcase collaboration across borders.” Other details related to the project are yet to be announced.
The main focus of the project is to create a public blockchain infrastructure platform. This platform will be able to support the implementation of cross-border cooperation projects along the Belt and Road Initiative. It will provide the base for developing applications that… https://t.co/MkWgRY2G8A
— Conflux Network Official (@Conflux_Network) April 1, 2024
This is not the first time that China has shown interest in exploring the Web3 sector. The Chinese government recently hinted at its preparedness plan to address the growth of metaverse technology in the country.
In January 2024, the Chinese government set up a special body tasked with setting the standards for the use of metaverse tech in China. The group consists of several Chinese tech majors, including Tencent, Baidu, and Ant Group.
China also leads the Asian market in conducting CBDC trials into advanced phases with international banks such as Standard Chartered participating in the trials.
While Beijing imposed a blanket ban on crypto-related activities in September 2021 owing to electricity shortages, an underground network of crypto traders has managed to keep the trading operations running. A December 2023 report by Vietnamese investment capital firm Kyros Ventures claimed stablecoins are particularly popular in China with 33.3 percent of Chinese investors holding those digital currencies.
Affiliate links may be automatically generated – see our ethics statement for details.
Apple researchers have published a new paper on an artificial intelligence (AI) model that it claims is capable of understanding contextual language. The yet-to-be peer-reviewed research paper also mentions that the large language model (LLM) can operate entirely on-device without consuming a lot of computational power. The description of the AI model makes it seem suited for the role of a smartphone assistant, and it could upgrade Siri, the tech giant’s native voice assistant. Last month, Apple published another paper about a multimodal AI model dubbed MM1.
The research paper is currently in the pre-print stage and is published on arXiv, an open-access online repository of scholarly papers. The AI model has been named ReALM, short for Reference Resolution As Language Model. The paper highlights that the primary focus of the model is to perform and complete tasks prompted using contextual language, which is closer to how humans naturally speak. For instance, as per the paper’s claim, it will be able to understand when a user says, “Take me to the one that’s second from the bottom”.
ReALM is made for performing tasks on a smart device. These tasks are divided into three segments — on-screen entities, conversational entities, and background entities. Based on the examples shared in the paper, on-screen entities refer to tasks that appear on the screen of the device, conversational entities are based on what the user has requested, and background entities refer to tasks that are occurring in the background such as a song playing on an app.
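One way to picture the approach the paper describes – recasting reference resolution as a language-modeling problem – is to serialize the candidate entities into a plain-text prompt that a model can reason over. The function name and prompt format below are illustrative assumptions, not the actual ReALM implementation:

```python
def build_realm_style_prompt(request: str, entities: list[dict]) -> str:
    """Serialize candidate entities into a numbered text list so a language
    model can resolve references like 'the one second from the bottom'.

    Each entity dict carries a 'kind' (on-screen, conversational, or
    background) and a 'text' description; the layout is a hypothetical
    sketch of the kind of textual encoding the paper describes.
    """
    lines = [f"{i + 1}. [{e['kind']}] {e['text']}" for i, e in enumerate(entities)]
    return ("Entities:\n" + "\n".join(lines)
            + f"\nUser request: {request}\nAnswer with the entity number.")
```

Encoding everything as text like this is what lets a comparatively small language model handle all three entity types uniformly, which is consistent with the paper's claim of strong performance at modest parameter counts.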
What is interesting about this AI model is that the paper claims despite taking on the complex task of understanding, processing, and performing actions suggested via contextual prompts, it does not require high amounts of computational energy, “making ReaLM an ideal choice for a practical reference resolution system that can exist on-device without compromising on performance.” It achieves this by using significantly fewer parameters than major LLMs such as GPT-3.5 and GPT-4.
The paper also goes on to claim that despite working in such a restricted environment, the AI model demonstrated “substantially” better performance than OpenAI’s GPT-3.5 and GPT-4. The paper further elaborates that while the model scored better on text-only benchmarks than GPT-3.5, it outperformed GPT-4 for domain-specific user utterances.
While the paper is promising, it is not peer-reviewed yet, and as such its validity remains uncertain. But if the paper gets positive reviews, that might push Apple to develop the model commercially and even use it to make Siri smarter.
Realme 12X 5G was launched in India on Tuesday, April 2. The phone is powered by a MediaTek Dimensity 6100+ SoC and is backed by a 5,000mAh battery with SuperVOOC fast charging support. It carries a dual rear camera system and gets features such as the Dynamic Button, Air Gestures, and Mini Capsule 2.0. The phone will be available for sale in the country later this month and is offered in three RAM options and two colourways. Notably, the phone was initially unveiled in China in March this year.
Realme 12X 5G price in India, availability
The 4GB + 128GB variant of the Realme 12X 5G is priced in India at Rs. 11,999, while the 6GB + 128GB and 8GB + 128GB variants are listed at Rs. 13,499 and Rs. 14,999, respectively. The phone is available for purchase via Flipkart and the Realme India website.
Although the company has not yet confirmed the sale date of the handset, it has announced that the Realme 12X 5G will be available for an Early Bird Sale on April 2 from 6pm to 8pm IST. The 4GB, 6GB and 8GB variants will be available for Rs. 10,999, Rs. 11,999, and Rs. 13,999, respectively, during the sale.
The Realme 12X 5G is offered in India in two colour options – Twilight Purple and Woodland Green.
Realme 12X 5G specifications, features
Realme 12X 5G sports a 6.72-inch full-HD+ (2,400 x 1,080 pixels) IPS LCD screen with a 120Hz refresh rate and 950 nits of peak brightness level. It is powered by a 6nm MediaTek Dimensity 6100+ SoC paired with a Mali-G57 MC2 GPU, up to 8GB of LPDDR4x RAM and 128GB of UFS 2.2 onboard storage. The phone ships with Android 14-based Realme UI 5.0.
For optics, the Realme 12X 5G carries a dual rear camera system which includes a 50-megapixel primary sensor and a 2-megapixel macro shooter. The front camera, placed within a centred hole-punch slot at the top of the display, houses an 8-megapixel sensor.
The Realme 12X 5G is equipped with a Dynamic Button, which has also recently been seen in the Realme 12 5G model. This feature can be used as a shortcut button for toggling different functions like Airplane mode and DND, as well as operating the camera shutter, flashlight, and more. The handset also supports the Air Gestures feature, also spotted on the Realme Narzo 70 Pro 5G, which offers users a touchless experience. It also has a Mini Capsule 2.0 feature that shows users’ calls, charging and other important alerts via an animation around the hole-punch cutout on the display.
Realme has packed a 5,000mAh battery in the Realme 12X 5G with support for 45W wired SuperVOOC charging. The phone carries dual stereo speakers, a 3.5mm audio jack, and a side-mounted fingerprint scanner. It also supports dual 5G, Wi-Fi, GPS, Bluetooth 5.3, and USB Type-C connectivity. The handset also comes with an IP54 rating for dust and splash resistance. It weighs 188g and measures 165.6mm x 76.1mm x 7.69mm in size.