OpenAI, the company behind the widely used ChatGPT, is grappling with two security challenges this week, raising significant concerns about its handling of user data and its internal security protocols.

(Photo: OLIVIER DOULIERY/AFP via Getty Images) Illustration produced in Arlington, Virginia on November 20, 2023, showing a smartphone displaying the OpenAI logo alongside a photo of former OpenAI CEO Sam Altman attending the Asia-Pacific Economic Cooperation (APEC) Leaders' Week in San Francisco, California, on November 16, 2023.

OpenAI's ChatGPT Mac App Faces Security Issues

According to Engadget, the first issue involves the Mac application of its popular AI-powered chatbot, ChatGPT. Developer Pedro José Pereira Vieito recently made a troubling discovery: the Mac app was storing user conversations locally in plain text, without any encryption.

This practice poses serious security risks, as the stored conversations could be read by other applications or malicious software. Apps distributed through Apple's App Store must adhere to stringent sandboxing requirements for added security; because OpenAI distributes the app directly, it is not subject to those rules, which allowed this oversight.

Following public attention and coverage by The Verge, OpenAI swiftly responded by releasing an update introducing encryption for locally stored chats and addressing the initial vulnerability.
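The risk of plain-text local storage can be sketched with a short, hypothetical example: a chat app writes its history to a file that any process running as the same user can read, and a first hardening step restricts the file to its owner. (The file name and contents here are invented for illustration; OpenAI's actual update went further by encrypting the stored chats.)

```python
import os
import stat
import tempfile

# Hypothetical chat history, as a plain-text local file.
chat_log = "user: hello\nassistant: hi there"

path = os.path.join(tempfile.mkdtemp(), "conversations.txt")

# Plain-text storage: any process running as this user could read it.
with open(path, "w") as f:
    f.write(chat_log)

# Minimal hardening: make the file readable/writable by the owner only.
# (This addresses only file permissions; encryption, as in OpenAI's fix,
# also protects the data if the file itself is copied or exfiltrated.)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # mode 0o600

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```

Restricting permissions limits which local processes can open the file, but only encryption protects the contents once the file leaves the machine.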

It is worth noting that sandboxing is a security technique used to isolate applications and prevent possible risks or failures from spreading across a system. It creates a controlled environment, often called a sandbox, where an application runs with restricted access to system resources and limited interactions with other software. 

This approach helps mitigate the impact of malicious actions or software bugs by containing them within a confined space, thus protecting the overall system and its data from unauthorized access or damage.
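The containment idea described above can be loosely illustrated in code: run a program as a child process with a scrubbed environment and hard resource caps, so a misbehaving program is confined. (This is only an analogy and is Unix-only; real macOS App Sandbox isolation is enforced by the operating system, not by a parent process.)

```python
import resource
import subprocess
import sys

def limit_resources():
    # Runs in the child before exec: cap CPU time at 1 second
    # and address space at 256 MB.
    resource.setrlimit(resource.RLIMIT_CPU, (1, 1))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024**2, 256 * 1024**2))

# Launch a trivial program with no inherited environment variables
# and the resource limits above applied.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the sandbox')"],
    env={},                      # scrubbed environment
    capture_output=True,
    text=True,
    preexec_fn=limit_resources,  # apply limits in the child
)
print(result.stdout.strip())
```

If the child program looped forever or tried to allocate unbounded memory, the limits would terminate it without affecting the parent, which is the essence of containment.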

Read Also: OpenAI Not Getting Paid for Apple ChatGPT Integration, Companies Consider Exposure, Wider Rollout As Bigger Wins

OpenAI Experienced Massive Breach

The second security issue dates back to 2023, when OpenAI suffered a massive breach: a hacker infiltrated the company's internal messaging systems and obtained sensitive information.

The New York Times reported that Leopold Aschenbrenner, a technical program manager at OpenAI, raised alarms with the company's board about these security vulnerabilities.

Aschenbrenner's concerns centered on the potential for foreign adversaries to exploit weaknesses in OpenAI's internal defenses. His efforts to raise these issues led to internal friction, culminating in his departure from the company.

OpenAI, however, disputed claims that his termination was linked to whistleblowing, asserting that his departure was not retaliatory but based on other factors. App security lapses are a common challenge across the tech industry and are often followed by breaches from malicious actors.

The widespread adoption of ChatGPT across diverse platforms, combined with OpenAI's internal security hurdles, underscores broader concerns about the company's capacity to secure user data and maintain robust cybersecurity practices.

These developments raise critical questions about how effectively OpenAI can safeguard sensitive information and mitigate potential risks.

Related Article: OpenAI Transcription Tool Whisper Found to Generate Harmful, Violent Text Due to Hallucinations, Study Finds


ⓒ 2024 TECHTIMES.com All rights reserved. Do not reproduce without permission.