The Real Concern About DeepSeek AI
The peanut gallery has a lot to say about artificial intelligence, and I am sure you’ve heard about their latest fear-mongering darling, DeepSeek AI. Many concerns stem from the lack of transparency around privacy and the potential nefarious behavior of the China-based company (sound like another Chinese platform?). Here’s the thing: they’re not wrong to send up a warning flare about how the AI technology handles data, but the parallel some are drawing to TikTok is where my agreement ends.
TikTok is a platform intended for entertainment, and everything you put on it is meant to be seen by the entire world. Your user data and preferences are openly monitored and translated into an algorithm. Where TikTok is intentionally public, AI tools like DeepSeek are inherently private. Users are far more likely to upload confidential information or data to an AI tool than to TikTok, and that is where my concern lies. Having spent a significant part of my career in military intelligence, I have every expectation that the data we feed into any AI tool hosted in China could be used by its government for purposes we won’t find very appealing. With more and more companies and individuals adopting AI tools as part of their workflow, there is a growing opportunity for industrial espionage, with sensitive information being freely uploaded.
While I have some confidence in U.S. government regulation and laws that protect consumers (it’s never perfect, but it’s generally pretty good), those same assurances do not exist when you consider China. The outcry around DeepSeek security is warranted in this case, but personally, I would not use any AI tool for sensitive or confidential research without a transparent understanding of where the information I’m providing is going.
Before you write off the technology completely, I will point out that DeepSeek’s technical achievements are impressive, especially when considering their constraints. The DeepSeek team developed innovative techniques to accelerate AI reasoning and performance, and by releasing their work as open source, they’ve allowed developers to examine and potentially run the model locally. This kind of open research sharing often leads to broader advancements in the field—in fact, it’s likely that other AI companies will incorporate some of DeepSeek’s innovations into their own products.
Unfortunately, if you back up and examine the company’s claim that it built its model on a budget of $5 to $6 million, you are again met with a concerning lack of transparency. While DeepSeek may claim to have built this level of AI technology on a very slim budget, the sophistication and advancement the platform demonstrates leads me to believe they likely “distilled” knowledge from larger AI models, such as those built by OpenAI, which has spent billions creating its platform. In other words, it’s not an apples-to-apples comparison.
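For readers unfamiliar with the term: “distillation” means training a smaller, cheaper “student” model to mimic a larger “teacher” model’s full output distribution rather than learning from raw data alone. The sketch below is purely illustrative of that general technique; it is not DeepSeek’s or anyone else’s actual training code, and the example logits are made up.

```python
import math

def softmax(logits, temperature=1.0):
    # A higher temperature "softens" the distribution, exposing the
    # teacher's relative confidence across all answers, not just its top pick.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the teacher's softened outputs and the
    # student's: minimizing this trains the student to imitate the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]          # hypothetical teacher logits
far = distillation_loss(teacher, [0.2, 2.5, 1.0])   # student disagrees
near = distillation_loss(teacher, [3.8, 1.1, 0.4])  # student imitates well
```

The loss shrinks toward zero as the student’s outputs approach the teacher’s, which is why a well-distilled small model can look far more capable than its training budget alone would suggest.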
Ultimately, the clamor and fear-mongering around DeepSeek is an excellent reminder for all of us to proceed with caution when it comes to any new technology. The challenge is that with AI advancing so rapidly, we’re dealing with unintended consequences we haven’t even thought of… yet. Personally, I don’t plan on using the platform, not just due to security concerns but also because I have not seen sufficient evidence of the model meaningfully outperforming others. Plus, the nominal cost savings aren’t enough to make me seriously consider it. That does not necessarily mean it will stay this way—technology has a sneaky way of accelerating when you’re least expecting it. But for now, I’ll stick with the models that more transparently disclose and demonstrate the billions of dollars invested in their technology, security, and innovation.
#DeepSeek #OpenAI #Cybersecurity #AI