DeepSeek, a Chinese startup, has launched AI models that it claims rival or surpass leading U.S. models at a fraction of the cost, according to Reuters.
Its DeepSeek-V3 model reportedly required less than $6 million worth of computing power to train and has propelled the company's AI Assistant to the top of Apple's App Store in the U.S., surpassing ChatGPT. This has raised doubts about the scale of AI spending by U.S. tech companies and hit the shares of major players such as Nvidia.
DeepSeek's models, DeepSeek-V3 and DeepSeek-R1, have been praised for their quality and cost efficiency, challenging the perceived gap between U.S. and Chinese AI capabilities. Some skepticism remains, however, with allegations that the company used undisclosed advanced AI chips and that its training costs were higher than reported.
“DeepSeek has quickly risen to the top of app stores worldwide, attracting thousands of users curious about its new AI model. With its advanced capabilities and a more affordable subscription price, it’s no surprise that people are eager to explore what it can do. However, while innovation in AI is undoubtedly exciting, it’s more important than ever to approach these tools with caution.
This accessibility comes at a significant cost: our privacy. According to DeepSeek’s privacy policy, app data is stored on servers in China, and this may include chat history: text, audio, prompts, and any uploaded files such as images or PDFs. Those who choose to use the app should avoid uploading private or sensitive information, whether personal or work-related. Remember, even seemingly mundane queries about health, politics, or everyday problems could be exposed. It may sound like old advice, but caution about sharing personal information applies to every platform where we disclose private data, including OpenAI’s ChatGPT. However, because DeepSeek stores data in China, it raises additional questions about how that data could be used or monitored without consent.
This combination of accessibility and advanced functionality also poses new risks. Unlike proprietary models, DeepSeek’s reasoning model, R1, is openly available, which broadens access but also leaves it vulnerable to manipulation. Malicious actors, including cybercriminals and state-sponsored groups, could exploit its advanced reasoning capabilities to optimize malicious code, engineer harmful devices, or generate unrestricted deepfake content. Ultimately, these services can be used to manipulate public opinion and shift political thinking in inconspicuous ways; as with social media, in the wrong hands they can become weapons of mass destruction.”
by Filip Mazan, ESET
ESET bears no responsibility for the accuracy of this information.