Introduction
I previously posted about downloading and running DeepSeek R1 in Ollama. There are big questions about DeepSeek’s security, safety, and the legality of its use outside of China. In this post, I share my opinion and some relevant links on the topic.
Is it secure?
When working with GenAI tools such as ChatGPT or DeepSeek R1, we generally want our privacy preserved. Who can access our data? Is using DeepSeek R1 secure? Is the model’s output correct?
Jailbreaking
According to KELA’s report “DeepSeek R1 Exposed: Security Flaws in China’s AI Model”, DeepSeek R1 is highly vulnerable to “jailbreaking,” allowing malicious users to bypass safety features and produce harmful content. This includes generating instructions for illegal activities, creating dangerous materials, and fabricating sensitive information [2].
Data Storage and Privacy
DeepSeek stores user data on servers in China, which raises privacy concerns for Western users because data protection regulations differ. Chinese law may require DeepSeek to share user data with the government, potentially compromising user privacy.
No opt-out?
Based on its tests, KELA advises caution in adopting DeepSeek: the company operates under China’s data-sharing obligations, and it is unclear whether users can opt out of having their inputs retained [2]. The model also has significant safety vulnerabilities. Organizations that prioritize privacy and security should carefully assess these AI-related risks before using public generative AI applications [2].
So, using tools such as DeepSeek R1, and potentially similar software, poses serious privacy and security risks. We must stay informed about the latest security assessments of DeepSeek R1. What can we do about it?
It is paramount to be careful with the information you share with DeepSeek R1, especially sensitive data: do not share confidential information with the model.
Additionally, I suggest running DeepSeek R1 locally, as explained in this post. This does not entirely eliminate the risk of data leakage or misinformation, but running DeepSeek R1 locally is much safer than using the website version; see the sketch below. For even better protection, you might run it offline :)
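For illustration, here is a minimal sketch of querying such a local setup from code. It assumes the Ollama server is running on your machine and that you have already pulled an R1 model; the ollama Python package and the model tag deepseek-r1:7b are my assumptions, so substitute whichever variant you downloaded.

```python
# Minimal sketch: chat with a locally served DeepSeek R1 model through the
# `ollama` Python package (pip install ollama). Assumes the Ollama server is
# running locally and the model has been pulled, e.g. `ollama pull deepseek-r1:7b`.
import ollama

MODEL = "deepseek-r1:7b"  # assumed tag; use the variant you actually pulled

response = ollama.chat(
    model=MODEL,
    messages=[
        # The request goes to the local Ollama server (localhost:11434 by
        # default), so the prompt never leaves your machine.
        {"role": "user", "content": "Summarize the main privacy risks of cloud-hosted chatbots."},
    ],
)
print(response["message"]["content"])
```

Because all traffic stays on localhost, you can even disconnect the machine from the network after pulling the model, which is the offline setup mentioned above.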
Discussion
However, let’s think outside the box: many GenAI products pose similar risks, potentially no lesser than DeepSeek R1’s. We have little control, and there is not yet enough transparency for us to be sure our data is not being shared by other AI tools either.
Besides, creating AI is now possible even on a smaller budget. Isn’t that great news that fosters innovation? Is the AI bubble about to burst?
Conclusion
There is no totally secure AI tool or software product. We must, however, be cautious about how we use them, how our data is transferred, and where it is stored. The security risks in AI apps cannot yet be fully mitigated, so use them carefully and do not share personal information with bots.
Good luck, and remember to have fun first and foremost. AI regulations are being developed, but at least for today, no one can fully protect you from a software security threat. In the end, we can always learn from our mistakes :)
Did you like this post? Please let me know if you have any comments or suggestions.
References
[2] KELA, “DeepSeek R1 Exposed: Security Flaws in China’s AI Model”.
About Elena
Elena, a PhD in Computer Science, simplifies AI concepts and helps you use machine learning.