Archives January 2024

Cyberpunk 2077 latest patch adds a new customizable feature for Rayfield vehicles and fixes melee finishers

Cyberpunk 2077 patch 2.11 has added a brand new feature for Rayfield vehicles.

CD Projekt Red dropped the latest update today (January 31) which features a ton of changes to both the base game and the Phantom Liberty expansion, including an exciting new feature to Rayfield-manufactured cars that allows players to change the paint job.

Thousands of Jenkins instances exposed following attack

Tens of thousands of Jenkins servers are vulnerable to a high-severity bug that allows threat actors to run malicious code on the endpoints, remotely. 

The project recently released two patches addressing the vulnerability and is urging users to apply them immediately to avoid unnecessary risk.

ChatGPT: friend or foe for app developers?

Once considered a laborious, complex, and highly specialised task, coding has changed a lot. Where previously everything had to be written from scratch and coding libraries didn’t exist, the modern-day developer has a world of technology at their disposal to make the process easier. For businesses, it means that their developer teams can churn out code faster than ever before, allowing them to better meet consumers’ growing demands for quicker and better applications.

The latest technological development that’s further expediting coding speeds is AI, and more specifically ChatGPT. ChatGPT puts even more power into the hands of developers, making it possible to auto-generate code in an instant, in whatever programming language is needed, using simple prompts. Whilst the adoption of ChatGPT and other AI tools in the coding space is already well under way, it’s important to stop and take stock of the cybersecurity implications they may bring with them. It is vital that developers are educated about cybersecurity best practices when using these tools, to ensure that the code they produce is secure. For all the work that ChatGPT can take on, the ultimate responsibility for making sure code is safe will always lie with humans. For that reason, caution around how developers use this technology is essential.

AI: the next step in the coding evolution

One of the aspects I find most enjoyable about software development is its constant evolution. As a developer, you are always seeking ways to enhance efficiency and avoid duplicating code, following the principle of “don't repeat yourself.” Throughout history, humans have sought means to automate repetitive tasks. From a developer's perspective, eliminating repetitive coding allows us to construct superior and more intricate applications.

AI bots are not the first technology to assist us in this endeavor. Instead, they represent the next phase in the advancement of application development, building upon previous achievements.

How much should developers trust ChatGPT?

Prior to AI-powered tools, developers would search on platforms like Google and Stack Overflow for code solutions, comparing multiple answers to find the most suitable one. With ChatGPT, developers specify the programming language and required functionality, receiving what the AI tool deems the best answer. This saves time by reducing the amount of code developers need to write. By automating repetitive tasks, ChatGPT enables developers to focus on higher-level concepts, resulting in advanced applications and faster development cycles.

However, there are caveats to using AI tools. They provide a single answer with no validation from other sources, unlike what you would see in a collective software development community, so developers need to validate any AI solution. In addition, because the tool is in beta stage, the code served by ChatGPT should still be evaluated and cross-checked before being used in any application.
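
To make that generate-then-verify workflow concrete, here is a minimal Python sketch; it assumes the official OpenAI Python client, and the prompt, model name and file name are illustrative rather than a recommended setup. The point is the last step: the model’s answer is parked for human review and cross-checking, not executed or shipped as-is.

    # Minimal sketch: ask an AI tool for code, then treat the answer as untrusted input.
    # Assumes the official OpenAI Python client (v1+) with an API key in OPENAI_API_KEY;
    # the prompt and model name are purely illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = "Write a Python function that validates an email address with a regex."
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )

    generated_code = response.choices[0].message.content

    # Do not execute or ship the output directly: park it for human review,
    # cross-check it against a trusted source, and run it through your normal
    # code review and security tooling first.
    with open("generated_snippet_for_review.py", "w") as f:
        f.write(generated_code)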

There are plenty of examples of breaches that started thanks to someone copying over code and not checking it thoroughly. Think back to the Heartbleed exploit, a security bug in a popular library that led to the exposure of hundreds of thousands of websites, servers and other devices which used the code.

Because the library was so widely used, the thought was, of course, someone had checked it for vulnerabilities. But instead, the vulnerability persisted for years, quietly used by attackers to exploit vulnerable systems.

This is the darker side to ChatGPT; attackers also have access to the tool. While OpenAI has built some safeguards to prevent it from answering questions regarding problematic subjects like code injection, the CyberArk Labs team has already uncovered some ways in which the tool could be used for malicious reasons. Breaches have occurred due to blindly incorporating code without thorough verification. Attackers can exploit ChatGPT, using its capabilities to create polymorphic malware or produce malicious code more rapidly. Even with safeguards, developers must exercise caution.

The buck always stops with humans

With these potential security risks in mind, there are some important best practices to follow when using code generated by AI tools like ChatGPT. Start by checking the solution generated by ChatGPT against another source, like a community you trust, or friends. You should then make sure the code follows best practices for granting access to databases and other critical resources: applying the principle of least privilege, managing secrets properly, and auditing and authenticating access to sensitive resources.
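
As a rough, hypothetical illustration of those last two points, the sketch below pulls database credentials from the environment rather than from source code and connects with a narrowly scoped, read-only role. PostgreSQL via psycopg2 is just one possible stack, and the environment variable, table and role names are invented for the example.

    # Hypothetical sketch: least privilege plus basic secrets management.
    # Credentials come from environment variables (or a secrets manager),
    # never from values hard-coded in source, and the account used is a
    # narrowly scoped, read-only role.
    import os
    import psycopg2

    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_READONLY_USER"],          # read-only role
        password=os.environ["DB_READONLY_PASSWORD"],  # injected at runtime
    )

    with conn, conn.cursor() as cur:
        # The role should only be granted SELECT on the tables it actually needs.
        cur.execute("SELECT id, name FROM customers WHERE active = %s", (True,))
        rows = cur.fetchall()

The specifics will vary by stack; the point is that credentials never live in the code and the account can do no more than the task requires.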

Make sure you double-check the code for any potential vulnerabilities and be aware of what you’re putting into ChatGPT as well. There is a question of how secure the information you put into ChatGPT is, so be careful when using highly sensitive inputs. Ensure you’re not accidentally exposing any personal identifying information that could run afoul of compliance regulations.
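
One simple precaution is to redact obvious identifiers before a prompt ever leaves your machine. The sketch below is a deliberately basic illustration using regular expressions, with invented patterns and example text; a real deployment would lean on proper data-loss-prevention tooling rather than a pair of regexes.

    # Simple, non-exhaustive sketch: strip obvious PII (emails and phone-like
    # numbers) from text before it is pasted or sent to an external AI tool.
    # The patterns and the example prompt are invented for illustration.
    import re

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact_pii(text: str) -> str:
        text = EMAIL_RE.sub("[REDACTED_EMAIL]", text)
        text = PHONE_RE.sub("[REDACTED_PHONE]", text)
        return text

    prompt = "Customer jane.doe@example.com on +44 7700 900123 reports a login bug."
    print(redact_pii(prompt))
    # -> Customer [REDACTED_EMAIL] on [REDACTED_PHONE] reports a login bug.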

No matter how developers use ChatGPT in their work, when it comes to the safety of the code being produced the responsibility will always lie with humans. They cannot place blind faith in a machine that is ultimately just as liable to making mistakes as they are. To prevent potential issues, developers need to work closely with security teams to analyse how they’re using ChatGPT, and ensure that they’re adopting identity security best practices. Only then will they be able to reap the benefits of AI without putting security at risk.

This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Addressing the cloud skills gap

As an industry underpinned by innovation, the technology sector can be its own worst enemy when it comes to skills. Cloud technologies are evolving more rapidly than previous generations, but workers aren’t being trained or upskilled quickly enough to meet this demand. As a result, the industry continues to face a widening skills gap, with just over half of UK businesses (57%) ranking hiring the right tech talent as their biggest challenge, according to TechUK’s latest Digital Economy Monitor Survey.

The skills required in this challenge are constantly evolving, too, thanks to developments in areas such as cloud-native applications, microservices, Kubernetes, containers, and hybrid and multi-cloud environments. The World Economic Forum predicts that by 2027, 44% of workers will see their core skills disrupted because technology is moving too fast, and only half of workers will have access to adequate training. In a world where we rely heavily on our phones, the data they need and the cloud infrastructure underpinning it all, addressing the cloud skills gap is vital for success, particularly in sectors such as financial services, telecoms and manufacturing.

Cedric Gegout, VP of Product Management at Canonical.

Looking beyond degrees to gain quality talent

Samsung Galaxy A35 5G, A55 5G may launch soon in India: What to expect

Samsung is reportedly working on multiple mid-range devices that are expected to launch under the company’s A-series and M-series. According to a report by GizmoChina, the Samsung Galaxy A35 5G and A55 5G smartphones have already been listed on the Indian certification site BIS.
The report adds that these upcoming models have also appeared on several other certification sites, including the FCC and Bluetooth SIG.
These certifications suggest that the smartphones may soon launch in India as well as other markets. The recent listings have also revealed some of the expected specs of the upcoming devices.

Samsung Galaxy A35 5G and Galaxy A55 5G: What to expect

As per the report, the Samsung Galaxy A35 5G smartphone has also appeared on Geekbench benchmarks.
Here, the phone has been listed with the model number SM-A356U and may come powered by the Exynos 1380 chipset. This processor also powers last year’s Samsung smartphones including the Galaxy A54 and the Galaxy M54.
The report also suggests that the Galaxy A35 5G may pack 6GB of RAM and may run Samsung’s Android 14-based OneUI 6.1 skin.
Rumours also suggest that the Galaxy A35 may sport curved corners and a 6.6-inch display.

Meanwhile, TUV Rheinland and 3C certifications revealed that the Galaxy A55 5G smartphone may support 25W wired fast charging.
Rumours suggest that the Galaxy A55 5G smartphone may launch between April and June 2024. The smartphone may boast a 6.5-inch Full HD+ OLED display that is expected to support 120Hz refresh rate.
The Galaxy A55 5G smartphone is likely to come powered by the Exynos 1480 chipset which will be backed by an AMD GPU.
For photography, the smartphone may house a triple rear camera setup that is likely to include a 50MP main sensor, a 12MP ultra-wide angle lens and 5MP macro unit. The smartphone may also pack a 5,000mAh battery unit and feature a 32MP front camera for selfies and video calls.
Both smartphones are expected to launch before Q2 2024.



Apple’s A18 Pro Leaked Benchmark Score Hints at Vastly Improved Single-Core Performance

Apple’s iPhone 16 Pro and iPhone 16 Pro Max are expected to debut this year with a new A18 Pro chip under the hood, and the company’s purported mobile processor could bring notable performance upgrades over its predecessor — the A17 Pro — which powers the iPhone 15 Pro models, according to a leaked benchmark score. The A18 Pro could offer better single-core performance than Qualcomm’s purported Snapdragon 8 Gen 4 chip, while the latter is tipped to offer better multi-core performance.

According to a post on X (formerly Twitter) by user Nguyen Phi Hung (@negativeonehero), Apple’s hexa-core A18 Pro chip scores 3,500 points on the single-core test and 8,200 on the multi-core test on Geekbench 6. In comparison, the current-generation iPhone 15 Pro Max with an A17 Pro chip scores 2,898 points and 7,201 points on the single-core and multi-core tests, respectively.

 

Leaked Geekbench scores for the A18 Pro and Snapdragon 8 Gen 4. Photo Credit: Screenshot/ X (@negativeonehero)

Meanwhile, benchmarks for the purported Snapdragon 8 Gen 4 — expected to succeed the recently unveiled Snapdragon 8 Gen 3 chip by the end of 2024 — recently leaked by the same user show the processor scoring 2,845 points in the single-core test and a whopping 10,628 points in the multi-core test.

As it stands, the A18 Pro could offer a 20.77 percent increase in single-core performance over its predecessor, while the Snapdragon 8 Gen 4 chip might leapfrog Apple’s chip in multi-core performance. In fact, the leaked A18 Pro single-core Geekbench 6 score is higher than that of the 16-inch MacBook Pro (3,230 points) released in late 2023 with an M3 chip.
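
For reference, those figures follow directly from the scores quoted above, as a quick back-of-the-envelope check in Python shows:

    # Quick check of the quoted figures using only the leaked Geekbench 6 scores above.
    a17_single, a17_multi = 2898, 7201        # iPhone 15 Pro Max (A17 Pro)
    a18_single, a18_multi = 3500, 8200        # leaked A18 Pro
    sd8g4_single, sd8g4_multi = 2845, 10628   # leaked Snapdragon 8 Gen 4

    single_core_gain = (a18_single - a17_single) / a17_single * 100
    print(f"A18 Pro single-core gain over A17 Pro: {single_core_gain:.2f}%")        # ~20.77%
    print(f"Snapdragon 8 Gen 4 multi-core lead: {sd8g4_multi - a18_multi} points")  # 2,428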

However, it’s too early to call these benchmarks reliable — the iPhone 16 Pro models aren’t expected until Q3 2024, while flagship Android phones with the rumoured Snapdragon 8 Gen 4 chip aren’t likely to debut until the last quarter. There’s also no way to verify if these benchmark scores are accurate, as they aren’t published on Geekbench’s website.

As recently as last year, Apple released two ‘Pro’ iPhone models with its latest mobile processor, while the two standard models were equipped with the chip from the previous year’s high-end iPhones. In 2023, Apple equipped the iPhone 15 models with a 4nm A16 Bionic processor, while the iPhone 15 Pro models came with the 3nm A17 Pro chip. It remains to be seen whether Apple will release the iPhone 16 models with a ‘Bionic’ processor while the Pro models get the purported A18 Pro chip.


Say no more: Netflix in talks to make a Top Boy spinoff series

Top Boy, the London-set crime drama that started off on Channel 4 and then moved to Netflix, was an incredible show – and when it ended after its third Netflix season I, like many fans, was absolutely gutted. The show was often compared favourably to The Wire, so it was a real shame when it didn’t return for another season.   

Its sharp writing, excellent cinematography and, most of all, amazing performances mean it’s a show I really miss. So I’m very excited to discover that Netflix may be making a spinoff. And not only that, but the show may focus on my very favourite character.

Google One cloud storage service to cross 100 million users soon: CEO Sundar Pichai

Google has announced its fourth-quarter (Q4) 2023 results, and during the earnings call, company CEO Sundar Pichai shared some details about its cloud storage service, Google One. Pichai said that Google One is “just about to cross” 100 million subscribers. The service was launched in 2018.
Pichai also noted that Google’s overall subscription business (which includes YouTube Premium and Music, YouTube TV, and Google One) is on an upward trajectory and has crossed $15 billion in annual revenues.
As per the company, its subscription business has grown fivefold compared to 2019. Pichai noted that, with the strong subscription performance, the company’s ‘Subscriptions, Platforms and Devices’ vertical registered 23% growth year-on-year.

YouTube’s subscription performance in Q4 2023

In 2022, Pichai said that YouTube had 80 million paid subscribers. Since then, the company hasn’t provided an updated number of paid users for its video streaming platform.

However, the company revealed some details about YouTube Shorts and shared similar numbers that were noted in Google’s Q3 2023 results. The company claimed that 2 billion signed-in users watch YouTube Shorts every month while the videos on the platform receive 7 billion daily plays.

Latest job cuts at Google

Google also laid off nearly 1,000 employees in multiple divisions earlier this month. The affected departments include hardware, engineering and services, while 100 employees were laid off from YouTube. Pichai later shared an internal memo noting that more layoffs are expected throughout this year.
The company also mentioned that it currently has 182,502 employees, which is more than the 182,381 employees it reported in the Q3 earnings release. However, the number is much lower than the 190,234 employees the company reported at the end of 2022.



ChatGPT Leaking Private Chats and Login Credentials: What You Need to Know

ChatGPT is reportedly leaking private conversations. According to a recent report by Ars Technica, the leaked details include login credentials and other personal details of unrelated users. The report shared screenshots submitted by a user who found the conversations in his chat history. The seven screenshots contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.
What was leaked
“THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better,” the user wrote. “I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong.”
Besides the exact language and the credentials, the leaked conversation includes the name of the app the employee was troubleshooting and the store number where the problem is said to have occurred. The conversations are said to have appeared shortly after the user, who goes by the name Whiteside, had used ChatGPT for an unrelated query.
“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).”
Other conversations leaked to Whiteside reportedly include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. As per the report, the users behind the leaked conversations appeared to be different from and unrelated to each other. The conversation involving the prescription portal included the year 2020. Exact dates didn’t appear in the other conversations.
What OpenAI said on ChatGPT leaks
Commenting on the alleged leaks, OpenAI told the publication that the ChatGPT histories the user reported resulted from his ChatGPT account being compromised. The unauthorized logins came from Sri Lanka, an OpenAI representative said. The user claimed that he logs into his account from Brooklyn, New York.
“From what we discovered, we consider it an account take over in that it’s consistent with activity we see where someone is contributing to a ‘pool’ of identities that an external community or proxy server uses to distribute free access,” the representative wrote. “The investigation observed that conversations were created recently from Sri Lanka. These conversations are in the same time frame as successful logins from Sri Lanka.”
What the ChatGPT maker’s explanation means
ChatGPT maker OpenAI’s explanation likely means the original suspicion of ChatGPT leaking chat histories to unrelated users may not be correct. However, it shows that the website provides no mechanism for users such as Whiteside to protect their accounts with 2FA or to track details such as the IP locations of current and recent logins. Such protections have been standard on most popular social platforms for years.



Adobe throws XD overboard after losing out on $20bn Figma deal

It’s all over for Adobe XD. After losing out in its $20bn bid for Figma, the firm is conceding defeat in the UI design arena – for now, at least. 

In the fallout from a deal that saw the company placed under the microscope of competition authorities in Europe and the US, Bloomberg reports that Adobe has confirmed it has “no plans to further invest” in its own UI/UX web design software.
