Workers Still Input Sensitive Information When Using AI Tools Despite Security Risks

Are you one of them?

Despite recognizing the risk, employees often overlook the potential leakage of sensitive data when using publicly available generative AI tools. These tools can expose customer information, financial data, and personally identifiable information such as email addresses and phone numbers.

Unfortunately, many employees lack clear guidelines on the appropriate use of such tools, as highlighted by research from Veritas Technologies.
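One way organizations translate such guidelines into practice is to screen prompts for sensitive data before they ever reach a public AI tool. The Python sketch below illustrates the idea with simple regular expressions; the patterns, function name, and placeholder format are illustrative assumptions, not any vendor's actual product.

```python
import re

# Illustrative patterns for two common PII types; a real deployment
# would use a dedicated data-loss-prevention library with far
# broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace detected PII with placeholder tags before the text
    leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    text = "Contact Jane at jane.doe@example.com or +1 555 123 4567."
    print(redact(text))
    # -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

Production systems detect many more data types and use context-aware methods, but even a crude filter like this makes an acceptable-use policy concrete and enforceable.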

Risks of Generative AI Tools

Conducted by market researcher 3Gem, the study surveyed 11,500 employees worldwide, shedding light on the widespread use and associated risks of generative AI tools.

Concerns include potential data leaks (39%), production of inaccurate information (38%), compliance risks (37%), and reduced productivity (19%).

Frequency of Use and Purpose

Despite these risks, a significant share of employees (57%) use generative AI tools weekly, with 22.3% incorporating them into daily tasks. Common uses include research and analysis (42%), drafting emails and memos (41%), and improving writing (40%), per ZDNet.

Data Types and Perceived Value

Employees see business value in various data types entered into generative AI tools, including customer information (30%), sales figures (29%), financial data (28%), and personally identifiable information (25%). However, a notable portion (27%) doubts that inputting sensitive information into these tools yields any business value.

Benefits and Challenges

Respondents also recognize the benefits of generative AI, including faster access to information (48%), increased productivity (40%), and task automation (39%). Even so, many view a colleague's use of these tools as unfair (53%) and believe users should share the knowledge gained with their team (40%).

Policies and Guidelines

Despite the evident risks and benefits, a significant portion of employees (36%) report that their workplace has no formal policy on the use of generative AI tools. Only 24% are subject to mandatory policies, while 12% say their employer bans the tools outright.

Escalating Risks with Adoption

As the adoption of generative AI rises, so do security risks. According to IBM's X-Force Threat Intelligence Index 2024, the convergence of generative AI with existing attack techniques could lead to large-scale attacks, necessitating proactive security measures.

Identity-based threats, facilitated by generative AI, pose significant challenges, making robust security protocols imperative.

Global Trends in Cybersecurity

The IBM report also highlights global trends in cybersecurity, emphasizing Europe's exposure to a rise in ransomware attacks and data breaches.

Across regions, critical infrastructure organizations remain prime targets, underscoring the importance of patching, multi-factor authentication, and least-privilege principles in mitigating risks.
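As a minimal sketch of the least-privilege principle mentioned above, the Python snippet below denies access by default and grants it only where a role has an explicit permission; the role names, resources, and actions are hypothetical.

```python
# Minimal sketch of a least-privilege check: every role starts with
# no access, and permissions are granted explicitly per resource.
ROLE_PERMISSIONS = {
    "analyst": {"sales_reports": {"read"}},
    "admin": {"sales_reports": {"read", "write"},
              "user_accounts": {"read", "write"}},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Deny by default; allow only what the role explicitly grants."""
    return action in ROLE_PERMISSIONS.get(role, {}).get(resource, set())

print(is_allowed("analyst", "sales_reports", "write"))  # False: never granted
print(is_allowed("analyst", "sales_reports", "read"))   # True: explicit grant
```

The design choice that matters here is the default: an unknown role or resource falls through to an empty permission set, so anything not explicitly allowed is denied.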

Securing the Future

In light of these challenges, businesses must take a holistic approach to security, safeguarding AI models and infrastructure against evolving threats. By prioritizing cybersecurity and promoting the responsible use of generative AI, organizations can harness its transformative potential while minimizing the associated risks.

While some people regard AI as off-limits because of these risks, billionaire Bill Gates appears unfazed by concerns over its potential impact.

