Generative artificial intelligence is already a powerful and potentially frightening technology, and running it without any form of protection or safeguards is a major problem that DeepSeek is now dealing with. Its reported lack of necessary safeguards has made it a favorite target for people with malicious intent.
DeepSeek AI Has No Safeguards or Protections, Analysts Say
A report by Israeli research firm ActiveFence (via YNetNews.com) revealed that the Chinese AI startup is severely lacking in key areas of its operations, which may lead to serious problems in the future. First, the team found that DeepSeek's AI does not have any meaningful safeguards despite its massive global reach.
In its latest findings, the team reported that the company has no internal or external protections in place for user accounts, leaving the service open to misuse by people who wish to take advantage of it.
The report also noted that, unlike Western counterparts such as OpenAI, Google, and Perplexity, which have established guidelines and policies governing the use of their services, DeepSeek has no comparable rules. This lack of policies and safeguards is a significant problem for AI and non-AI users alike.
Criminals Can Exploit DeepSeek's AI Services
According to the report, criminals could exploit DeepSeek's services, using the technology to run scams that trick the public in a variety of scenarios.
ActiveFence tested DeepSeek's V3 model with a set of dangerous prompts and found that it returned harmful responses as much as 38% of the time.
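ActiveFence has not published its exact test harness, but a red-team evaluation of this kind typically works by sending a fixed list of risky prompts to the model and measuring the share of replies that a reviewer or safety classifier flags as harmful. The sketch below is a minimal, hypothetical illustration of that idea in Python; the query_model and is_harmful functions are placeholders, not DeepSeek's API or ActiveFence's actual methodology.

```python
# Hypothetical sketch of a harmful-response-rate test harness.
# query_model and is_harmful are stand-ins, NOT DeepSeek's or
# ActiveFence's actual code or methodology.

from typing import Callable, List


def harmful_response_rate(
    prompts: List[str],
    query_model: Callable[[str], str],   # sends a prompt, returns the model's reply
    is_harmful: Callable[[str], bool],   # human review or an automated safety classifier
) -> float:
    """Return the percentage of prompts that produced a harmful reply."""
    if not prompts:
        return 0.0
    flagged = sum(1 for p in prompts if is_harmful(query_model(p)))
    return 100.0 * flagged / len(prompts)


if __name__ == "__main__":
    # Toy example using canned replies instead of a live model call.
    canned = {"prompt A": "refusal", "prompt B": "harmful text"}
    rate = harmful_response_rate(
        prompts=list(canned),
        query_model=lambda p: canned[p],
        is_harmful=lambda reply: "harmful" in reply,
    )
    print(f"harmful response rate: {rate:.0f}%")  # prints 50%
```

A real evaluation would replace the canned replies with live API calls and the keyword check with human reviewers or a trained safety classifier; the reported 38% figure is the kind of statistic such a harness would produce.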
The Criminal Misuse of Generative AI in This Age
Although AI can seem like an almost all-knowing, all-powerful technology, not every conceivable topic is open for users to discuss with these systems. Crimes involving artificial intelligence have risen significantly, and authorities, AI companies, and other concerned groups have been actively working to thwart them.
Last year saw a growth in the misuse of artificial intelligence, with many bad actors turning to the technology to create deepfakes of well-known personalities to spread propaganda and mislead the public. Others used deepfakes for more heinous online campaigns, such as AI-generated pornography that victimized minors.
At present, some governments are working on legislation and regulations to prevent the misuse of artificial intelligence.