Australia's Caution Regarding DeepSeek's AI Model

The Australian government has recently advised caution in using DeepSeek’s generative AI model, a product of one of China’s most promising AI startups. DeepSeek’s advanced technology has quickly gained attention for its performance, but it has also raised concerns about data usage and privacy that cannot be ignored. As companies look to integrate cutting-edge AI solutions, it is crucial to understand the specific risks associated with this model and how to mitigate them.

Overview of Australia's Advisory on DeepSeek

Australia’s advisory stems from concerns over how DeepSeek’s AI handles sensitive information. The model’s data usage policies have been the subject of scrutiny, particularly in the context of international data transfers and potential access by foreign entities. This advisory serves as a reminder that even highly sophisticated AI tools come with complex regulatory and security considerations. By understanding the underlying concerns, businesses can make more informed decisions about whether and how to adopt such technology.

Assessing Security and Privacy Risks in AI Adoption

One of the most significant issues highlighted by the advisory is DeepSeek’s use of keystroke recording. While keystroke data is often gathered to refine input mechanisms and enhance user experience, it can introduce serious risks. Keystroke patterns—how a person types, including the rhythm, speed, and pressure used—constitute a form of behavioural biometrics.

Unlike passwords, which are fixed pieces of information, behavioural biometrics are unique, continuously generated characteristics tied directly to a user’s interactions. In other words, the way someone types is like a fingerprint—it’s nearly impossible to replicate exactly, and it can be used to distinguish one individual from another.
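To make the idea concrete, here is a minimal sketch of how keystroke dynamics are typically reduced to a timing profile. The event data and feature names are illustrative assumptions, not DeepSeek's actual telemetry format:

```python
from statistics import mean

# Hypothetical key events: (key, press_time_ms, release_time_ms).
# Purely illustrative data -- not drawn from any real product.
events = [
    ("p", 0, 95), ("a", 140, 230), ("s", 290, 370), ("s", 430, 540),
]

# Dwell time: how long each key is held down.
dwells = [release - press for _, press, release in events]

# Flight time: gap between releasing one key and pressing the next.
flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]

profile = {
    "mean_dwell_ms": mean(dwells),
    "mean_flight_ms": mean(flights),
}
print(profile)  # {'mean_dwell_ms': 93.75, 'mean_flight_ms': 55}
```

Even these two simple averages vary from person to person; real systems collect many more such features, which is precisely why the resulting profile can act like a fingerprint.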

If this type of data is not properly safeguarded, it could be exploited in several ways. For example, cybercriminals could use detailed keystroke data to construct profiles of individuals, potentially deducing sensitive information over time. This might include predicting a user’s passwords based on repeated patterns, identifying security questions, or even discerning other personal details that could lead to fraud or unauthorised account access.

Additionally, behavioural biometrics can serve as a second layer of authentication for many systems. If this data is compromised, it could undermine the security of platforms that rely on behavioural patterns to verify a user’s identity. In extreme cases, this could pave the way for identity theft, whereby a malicious actor uses stolen behavioural data to impersonate the victim, gain access to sensitive accounts, or conduct fraudulent transactions.
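A toy verification check illustrates why stolen behavioural data is so damaging. This is a deliberately naive sketch under the assumption that a profile is a small vector of timing features; the numbers and threshold are made up for illustration:

```python
import math

# Assumed profile format: a few averaged timing features (values illustrative).
enrolled = {"mean_dwell_ms": 94.0, "mean_flight_ms": 55.0}
session = {"mean_dwell_ms": 97.0, "mean_flight_ms": 52.0}


def matches(enrolled, session, threshold=10.0):
    """Accept the session if its features lie close to the enrolled profile."""
    distance = math.sqrt(
        sum((enrolled[k] - session[k]) ** 2 for k in enrolled)
    )
    return distance <= threshold


print(matches(enrolled, session))  # True -- a close match is accepted
```

The weakness is plain: an attacker who has captured the victim's timing features can replay values close to the enrolled profile and pass the same check, which is why leaked behavioural data undermines any system using it as an authentication factor.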

DeepSeek’s privacy policy mentions the use of data for model training and refinement, but it is the lack of clarity around keystroke logging practices that has alarmed regulators. This issue underscores the need for businesses to carefully review the terms of any AI provider before integrating their tools, ensuring that data handling practices are transparent and comply with local privacy standards.

Guidelines for Safe Integration of Foreign AI Solutions

For organisations considering DeepSeek’s AI model—or any foreign AI solution—there are several best practices to follow:

  • Conduct a Thorough Privacy Impact Assessment: Assess how data will be handled, stored, and processed, especially if the data may leave Australia’s jurisdiction. This step helps ensure compliance with local regulations and standards.
  • Review and Clarify Data Usage Terms: Examine the provider’s terms of service carefully, paying special attention to clauses around data retention, user inputs, and additional tracking methods like keystroke recording. If terms seem ambiguous or pose potential risks, seek clarification from the provider.
  • Implement Robust Access Controls and Security Measures: If using an AI model with known data collection practices, ensure that your organisation has stringent access controls and encryption protocols in place. Minimising access to sensitive data and ensuring it is stored securely can reduce the risks associated with biometric vulnerabilities.
  • Stay Informed on Evolving Regulations: AI governance is rapidly developing both in Australia and abroad. By staying up to date on legislative changes, companies can anticipate compliance requirements and adjust their practices accordingly.

Conclusion

DeepSeek’s advanced AI capabilities are undoubtedly impressive, but the concerns surrounding its data usage and keystroke recording practices highlight the importance of due diligence. Australian businesses must weigh the benefits of cutting-edge AI against potential security and privacy risks. By carefully reviewing terms, implementing robust safeguards, and staying informed, organisations can navigate the challenges of integrating foreign AI solutions while maintaining the trust and security of their users.