ISOSOC2 Type 1SOC2 Type 2PCI DSSHIPAAGDPR

DeepSeek (深度求索), founded in 2023, is a Chinese company dedicated to making AGI a reality. Unravel the mystery of AGI with curiosity. Answer the essential question with long-termism. 🐋

DeepSeek AI A.I CyberSecurity Scoring

DeepSeek AI

Company Details

Linkedin ID:

deepseek-ai

Employees number:

129

Number of followers:

167,839

NAICS:

513

Industry Type:

Technology, Information and Internet

Homepage:

deepseek.com

IP Addresses:

Scan still pending

Company ID:

DEE_6226520

Scan Status:

In-progress

AI scoreDeepSeek AI Risk Score (AI oriented)

Between 0 and 549

https://images.rankiteo.com/companyimages/deepseek-ai.jpeg
DeepSeek AI Technology, Information and Internet
Updated:
  • Powered by our proprietary A.I cyber incident model
  • Insurance preferes TPRM score to calculate premium
globalscoreDeepSeek AI Global Score (TPRM)

XXXX

https://images.rankiteo.com/companyimages/deepseek-ai.jpeg
DeepSeek AI Technology, Information and Internet
  • Instant access to detailed risk factors
  • Benchmark vs. industry & size peers
  • Vulnerabilities
  • Findings

DeepSeek AI

Critical
Current Score
411
C (Critical)
01000
6 incidents
-102.75 avg impact

Incident timeline with MITRE ATT&CK tactics, techniques, and mitigations.

DECEMBER 2025
412
NOVEMBER 2025
491
Breach
11 Nov 2025 • DeepSeek
Risks and Impacts of Shadow AI in Corporate Environments

DeepSeek, a Chinese AI provider, suffered a **data breach** linked to unsanctioned AI use, where sensitive corporate or user data—potentially including PII, proprietary code, or internal documents—was exposed due to employees inputting confidential information into unapproved AI models (e.g., public chatbots). The breach stemmed from shadow AI practices, where third-party AI tools (like DeepSeek’s own or others) stored and processed data without adequate security controls, leading to unauthorized access or leaks. The incident aligns with risks highlighted in the article: employees bypassing IT policies to use AI tools, resulting in data being retained on external servers with weaker protections. The breach not only violated data protection regulations (e.g., GDPR-like standards) but also risked further exploitation, such as adversaries accessing the leaked data or the AI model itself being compromised to exfiltrate additional information. The financial and reputational fallout included regulatory fines, loss of trust, and potential operational disruptions, compounded by the challenge of tracing all exposed data.

407
critical -84
DEE3893138111125
Data Leakage Unauthorized AI Usage (Shadow AI) Compliance Violation Operational Risk Third-Party Risk
Employee use of unsanctioned AI tools (e.g., ChatGPT, Gemini, Claude) Browser extensions with embedded AI AI features in legitimate business software enabled without IT approval Agentic AI (autonomous agents acting without oversight) Malicious fake AI tools designed to exfiltrate data
Lack of visibility into employee AI tool usage Inadequate acceptable use policies for AI Absence of vendor security assessments for AI tools Unsecured digital identities for AI agents Software vulnerabilities in AI tools (e.g., backdoors, bugs)
Employee productivity gains (unintentional risk) Corporate inertia in adopting sanctioned AI tools Financial gain (by threat actors exploiting shadow AI)
Financial Loss: Up to $670,000 per breach (IBM estimate); potential compliance fines (e.g., GDPR, CCPA) Personally Identifiable Information (PII) Intellectual Property (IP) Proprietary Code Meeting Notes Customer/Employee Data Employee Devices (BYOD, laptops) Corporate Networks (via unauthorized AI agents) Business Software (AI features enabled without IT knowledge) Third-Party AI Servers (data storage in unregulated jurisdictions) Flawed decision-making due to biased/low-quality AI outputs Introduction of exploitable bugs in customer-facing products Potential corporate inertia or stalled digital transformation Brand Reputation Impact: High (due to data breaches, compliance violations, or flawed AI-driven decisions) Regulatory fines (e.g., GDPR, CCPA) Litigation from affected customers/employees Identity Theft Risk: High (if PII is shared with AI models or leaked)
Network monitoring to detect unsanctioned AI usage Restricting access to high-risk AI tools Developing realistic acceptable use policies for AI Vendor due diligence for AI tools Providing sanctioned AI alternatives Employee education on shadow AI risks Internal advisories on shadow AI risks Training programs for employees and executives Enhanced Monitoring: Recommended for detecting AI-related data leakage
PII (Customer/Employee) Intellectual Property Proprietary Code Corporate Meeting Notes Sensitivity Of Data: High (regulated data under GDPR, CCPA, etc.) Data Exfiltration: Potential (via AI model training or third-party breaches) Personally Identifiable Information: Yes (shared with AI models or leaked)
GDPR (General Data Protection Regulation) CCPA (California Consumer Privacy Act) Other jurisdiction-specific data protection laws
Shadow AI introduces significant blind spots in corporate security, exacerbating data leakage and compliance risks. Traditional 'deny lists' are ineffective; proactive policies and education are critical. Vendor due diligence for AI tools is essential to mitigate third-party risks. Employee awareness programs must highlight the risks of unsanctioned AI usage, including job losses and corporate inertia. Balancing productivity and security requires sanctioned AI alternatives and seamless access request processes.
Conduct a risk assessment to identify shadow AI usage within the organization. Develop and enforce an acceptable use policy tailored to corporate risk appetite. Implement vendor security assessments for all AI tools in use. Provide approved AI alternatives to reduce reliance on unsanctioned tools. Deploy network monitoring tools to detect and mitigate data leakage via AI. Educate employees on the risks of shadow AI, including data exposure and compliance violations. Establish a process for employees to request access to new AI tools. Monitor the evolution of agentic AI and autonomous agents for emerging risks.
['Ongoing (industry-wide trend, not a single incident)']
IT and security leaders should prioritize shadow AI as a critical blind spot. Executives must align AI adoption strategies with security and compliance goals. Employees should be trained on the risks of unsanctioned AI tools.
Employee-downloaded AI tools (e.g., ChatGPT, Gemini) Browser extensions with AI capabilities Unauthorized activation of AI features in business software Backdoors Established: Potential (via vulnerable AI tools or agents) Sensitive data stores (PII, IP, proprietary code) Corporate decision-making processes (via biased AI outputs)
Lack of visibility into employee AI tool usage Absence of clear acceptable use policies for AI Slow corporate adoption of sanctioned AI tools Inadequate vendor security assessments Employee frustration with productivity barriers Implement comprehensive AI governance frameworks. Enhance monitoring for unsanctioned AI usage. Foster a culture of security awareness around AI risks. Accelerate adoption of sanctioned AI tools to meet employee needs.
OCTOBER 2025
490
SEPTEMBER 2025
484
AUGUST 2025
479
JULY 2025
473
JUNE 2025
587
Breach
16 Jun 2025 • DeepSeek
Shadow AI Data Leakage and Privacy Risks in Corporate Environments (2024-2025)

In early 2025, researchers at Wiz uncovered a **vulnerable database operated by DeepSeek**, exposing highly sensitive corporate and user data. The breach included **chat histories, secret API keys, backend system details, and proprietary workflows** shared by employees via the platform. The leaked data originated from **shadow AI usage**—employees bypassing sanctioned tools to use DeepSeek’s consumer-grade LLM for tasks involving confidential spreadsheets, internal memos, and potentially trade secrets. While no direct financial fraud or ransomware was confirmed, the exposure of **authentication credentials and backend infrastructure details** created a severe risk of follow-on attacks, such as **spear-phishing, insider impersonation, or supply-chain compromises**. The incident highlighted the dangers of ungoverned AI adoption, where **ephemeral interactions with LLMs accumulate into high-value intelligence for threat actors**. DeepSeek’s database misconfiguration enabled attackers to harvest **years of prompt-engineered data**, including employee thought processes, financial forecasts, and operational strategies—effectively handing adversaries a **‘master key’ to internal systems**. Though DeepSeek patched the vulnerability, the breach underscored how **shadow AI expands attack surfaces silently**, with potential long-term repercussions for intellectual property theft, regulatory noncompliance (e.g., GDPR violations), and reputational damage. The exposure aligned with broader trends where **20% of organizations in an IBM study linked data breaches directly to unapproved AI tool usage**, with average costs exceeding **$670,000 per incident**.

464
critical -123
DEE5293552111725
Data Leakage Privacy Violation Shadow IT Risk AI Supply Chain Vulnerability Insider Threat (Unintentional)
Unauthorized AI Tool Usage (Shadow AI) Prompt Engineering Attacks (e.g., Slack AI exploitation) Misconfigured AI Databases (e.g., DeepSeek) Legal Data Retention Orders (e.g., OpenAI’s 2025 lawsuit) Social Engineering via AI-Generated Content (e.g., voice cloning, phishing)
Lack of AI Governance Frameworks Default Data Retention Policies in LLMs (e.g., OpenAI’s 30-day deletion lag) Employee Bypass of Sanctioned Tools Weak Authentication in AI Platforms Unmonitored Data Exfiltration via AI Prompts
Financial Gain (e.g., $243,000 scam via AI voice cloning in 2019) Corporate Espionage Data Harvesting for Dark Web Sales Disruption of Business Operations Exploitation of AI Training Data
Financial Loss: Up to $670,000 per breach (IBM 2025); Potential GDPR fines up to €20M or 4% global revenue Proprietary Code (e.g., Samsung 2023 incident) Financial Records (22% of UK employees use shadow AI for financial tasks) Internal Memos/Trade Secrets Employee Health Records Client Data (58% of employees admit sharing sensitive data) Chat Histories (e.g., DeepSeek’s exposed database) Secret Keys/Backend Details Corporate AI Tools (e.g., Slack AI) Third-Party LLMs (ChatGPT, Claude, DeepSeek) Enterprise Workflows Integrating Unsanctioned AI Legal/Compliance Systems (Data retention conflicts) Loss of Intellectual Property Erosion of Competitive Advantage Disruption of Internal Communications (e.g., AI-drafted memos leaking secrets) Increased Scrutiny from Regulators Revenue Loss: Potential 4% global revenue (GDPR fines) + breach costs Customer Complaints: Likely (due to privacy violations) Brand Reputation Impact: High (publicized breaches, regulatory actions) GDPR Noncompliance (Fines up to €20M) Lawsuits (e.g., New York Times vs. OpenAI 2025) Contractual Violations with Clients Identity Theft Risk: High (AI-generated impersonation attacks) Payment Information Risk: Moderate (22% use shadow AI for financial tasks)
Incident Response Plan Activated: Partial (e.g., Samsung’s 2023 ChatGPT ban) Wiz (DeepSeek vulnerability disclosure) PromptArmor (Slack AI attack research) IBM/Gartner (governance frameworks) Blanket AI Bans (e.g., Samsung 2023) Employee Training (e.g., Anagram’s compliance programs) AI Runtime Controls (Gartner 2025 recommendation) Centralized AI Inventory (IBM’s lifecycle governance) Penetration Testing for AI Systems Network Monitoring for Unauthorized AI Usage 30-Day Data Deletion Policies (OpenAI’s post-lawsuit commitment) AI Policy Overhauls Ethical AI Usage Guidelines Incident Response Playbooks for Shadow AI Public Disclosures (e.g., OpenAI’s transparency reports) Employee Advisories (e.g., Microsoft’s UK survey findings) Stakeholder Reports (e.g., IBM’s Cost of Data Breach 2025) Network Segmentation: Recommended (IBM/Gartner) Enhanced Monitoring: Recommended (e.g., tracking unauthorized AI tool usage)
Chat Histories Proprietary Code Financial Data Internal Documents Secret Keys Backend System Details Employee/Patient Health Records Trade Secrets Number Of Records Exposed: Unknown (potentially millions across affected platforms) Sensitivity Of Data: High (includes PII, financial, proprietary, and health data) Data Exfiltration: Confirmed (e.g., DeepSeek, Slack AI, Shadow AI leaks) Data Encryption: Partial (e.g., OpenAI encrypts data at rest, but retention policies create risks) Text (prompts/outputs) Spreadsheets (e.g., confidential financial data) Code Repositories Audio (e.g., voice cloning samples) Internal Memos Personally Identifiable Information: Yes (employee/client records, health data)
GDPR (Article 5: Data Minimization) CCPA (California Consumer Privacy Act) Sector-Specific Regulations (e.g., HIPAA for health data) Fines Imposed: Potential: Up to €20M or 4% global revenue (GDPR) New York Times vs. OpenAI (2025, data retention lawsuit) Unspecified lawsuits from affected corporations Likely required under GDPR/CCPA for breaches OpenAI’s court-mandated data retention (2025, later reversed)
Shadow AI is pervasive (90% of companies affected, per MIT 2025) and often invisible to IT teams. Employee convenience trumps compliance (58% admit sharing sensitive data; 40% would violate policies for efficiency). AI governance lags behind adoption (63% of organizations lack frameworks, per IBM 2025). Legal risks extend beyond breaches: data retention policies can conflict with lawsuits (e.g., OpenAI 2025). AI platforms’ default settings (e.g., 30-day deletion lags) create unintended compliance gaps. Prompt engineering attacks can bypass traditional security controls (e.g., Slack AI leak). Silent breaches are more damaging: firms may not realize data is compromised until exploited (e.g., AI-generated phishing).
Implement AI runtime controls and network monitoring for unauthorized tool usage. Deploy centralized inventories to track AI models/data flows (IBM’s lifecycle governance). Enforce strict data retention policies (e.g., immediate deletion of temporary chats). Conduct penetration testing for AI systems and prompt injection vulnerabilities. Use adaptive behavioral analysis to detect anomalous AI interactions. Develop clear AI usage policies with tiered access controls (e.g., ban high-risk tools like DeepSeek). Mandate regular training on shadow AI risks (e.g., Anagram’s compliance programs). Align AI governance with GDPR/CCPA requirements (e.g., data minimization by design). Establish incident response playbooks specifically for AI-related breaches. Foster innovation while setting guardrails (Gartner’s 2025 approach: ‘harness shadow AI’). Encourage reporting of unauthorized AI use without punishment (to reduce hiding behavior). Involve employees in vetting AI tools for enterprise adoption (Leigh McMullen’s suggestion). Highlight real-world consequences (e.g., $243K voice-cloning scam) in awareness campaigns. Treat AI as a critical third-party risk (e.g., vendor assessments for LLM providers). Budget for AI-specific cyber insurance to cover shadow AI breaches. Collaborate with regulators to shape AI data protection standards. Monitor dark web for leaked AI-trained datasets (e.g., employee prompts sold by initial access brokers).
Ongoing (industry-wide; no single investigation)
Corporate Clients: Demand transparency from AI vendors on data handling/retention. End Users: Avoid sharing sensitive data with consumer AI tools; use enterprise-approved alternatives. Partners: Include AI data protection clauses in contracts (e.g., right to audit LLM interactions).
CISOs: Prioritize AI governance frameworks and employee training. Legal Teams: Audit AI data retention policies for compliance conflicts. HR: Integrate AI usage into acceptable use policies and disciplinary codes. Board Members: Treat shadow AI as a top-tier enterprise risk.
Employee Use of Unsanctioned AI Tools Misconfigured AI Databases (e.g., DeepSeek) Prompt Injection Attacks (e.g., Slack AI) Legal Data Retention Orders (e.g., OpenAI 2025) Reconnaissance Period: Ongoing (years of accumulated prompts in some cases) Backdoors Established: Potential (e.g., AI-trained datasets sold on dark web) Financial Forecasts Product Roadmaps Legal Strategies M&A Plans Employee Health Records Data Sold On Dark Web: Likely (e.g., chat histories, proprietary data)
Lack of AI-Specific Governance (63% of orgs per IBM 2025). Over-Reliance on Employee Compliance (58% admit policy violations). Default Data Retention in LLMs (e.g., OpenAI’s 30-day deletion lag). Inadequate Vendor Risk Management for AI Tools. Cultural Prioritization of Convenience Over Security (71% UK employees use shadow AI). Technical Gaps: No Runtime Controls for AI Interactions. Mandate AI Lifecycle Governance (IBM’s 4-pillar framework). Deploy AI Firewalls to Block Unauthorized Tools. Enforce ‘Zero Trust’ for AI: Verify All Prompts/Outputs. Conduct Red-Team Exercises for Prompt Injection Attacks. Partner with AI Vendors for Enterprise-Grade Controls (e.g., private LLMs). Establish Cross-Functional AI Risk Committees (IT, Legal, HR).
MAY 2025
585
APRIL 2025
581
MARCH 2025
577
FEBRUARY 2025
577
Vulnerability
01 Feb 2025 • DeepSeek
DeepSeek Data Leak

DeepSeek, a generative AI platform, faced heightened concerns over privacy and security as it stores user data on servers in China. Security researchers discovered that DeepSeek exposed a critical database online, leaking over 1 million records, including user prompts, system logs, and API authentication tokens. The leaked information could lead to unauthorized access and misuse of user data, posing serious privacy and security risks. Furthermore, the platform's safety protections were found to be lacking when tested against various jailbreaks, illustrating a potential vulnerability to cyber threats.

570
critical -7
DEE001021525
Data Leak
Exposed Database
Improper Database Security
user prompts system logs API authentication tokens
user prompts system logs API authentication tokens Number Of Records Exposed: 1 million
JANUARY 2025
770
Breach
01 Jan 2025 • DeepSeek
DeepSeek Data Leak via Publicly Accessible ClickHouse Database

In January 2025, Chinese AI specialist **DeepSeek** suffered a critical data leak exposing over **1 million sensitive log streams**, including **chat histories, secret keys, and internal operational data**. The breach stemmed from a **publicly accessible ClickHouse database** with misconfigured access controls, granting unauthorized parties **full administrative privileges**—enabling potential data exfiltration, manipulation, or deletion. While Wiz Research promptly alerted DeepSeek, which secured the exposure, the incident highlighted vulnerabilities in **cloud storage misconfigurations** and **endpoint security**. The leaked data posed risks of **intellectual property theft, credential compromise, and regulatory non-compliance** (e.g., GDPR/CCPA fines). Given the scale and sensitivity of the exposed logs—likely containing **proprietary AI model interactions and authentication tokens**—the breach could undermine **customer trust, competitive advantage, and operational integrity**, with potential downstream effects like **fraud, reputational damage, or supply chain attacks**. The root cause aligned with **unintentional leakage** via **misconfigured infrastructure**, though insider threats or targeted exploitation remained plausible secondary risks.

573
critical -197
DEE456090325
Data Leak
Misconfigured Cloud Storage (Publicly Accessible ClickHouse Database) Potential Insider Threats (Unconfirmed) Potential Phishing/Social Engineering (Unconfirmed)
Improper Access Controls (Publicly Accessible Database)
Chat History Secret Keys Log Streams (1M+ records) ClickHouse Database Operational Impact: High (Exposure of Sensitive Internal Data) Brand Reputation Impact: Potential Long-Term Damage (Unquantified) Potential GDPR Fines (EU) Potential CCPA Fines (California) Identity Theft Risk: High (Exposure of Secret Keys) Payment Information Risk: Potential (If Secret Keys Included Payment-Related Credentials)
Incident Response Plan Activated: Yes (Prompt Securing of Database by DeepSeek) Third Party Assistance: Yes (Wiz Research Reported the Issue) Securing the Publicly Accessible Database
Log Streams Chat History Secret Keys Number Of Records Exposed: 1,000,000+ Sensitivity Of Data: High (Includes Authentication Credentials and Internal Communications) Data Encryption: No (Data Was Publicly Accessible) Log Files Potential Configuration Files
Potential GDPR (EU) Potential CCPA (California)
Importance of Least-Privilege Access Controls Need for Regular Audits of Cloud Configurations Risks of Publicly Accessible Databases Value of Third-Party Security Research (e.g., Wiz Research) Criticality of Data Classification and DLP Solutions
Enforce Least-Privilege Access Policies Implement Data Loss Prevention (DLP) Solutions Classify Sensitive Data and Prioritize Protection Conduct Regular Internal/External Security Audits Provide Comprehensive Employee Security Training Monitor for Shadow IT and Unauthorized Data Sharing Use Tools Like Outpost24’s CompassDRP for Leak Detection
['Resolved (Database Secured)']
Misconfigured ClickHouse Database (Publicly Accessible) Inadequate Access Controls Lack of Monitoring for Unauthorized Access Secured the Database Likely Reviewed Access Controls (Assumed) Potential Implementation of DLP or Monitoring Tools (Assumed)
Cyber Attack
01 Jan 2025 • DeepSeek
DeepSeek Data Privacy Incident

DeepSeek, a Chinese AI research lab, is under scrutiny for potentially compromising user data privacy. Recently popularized for its generative AI model, DeepSeek experienced a large-scale malicious attack causing limitation of new sign-ups. Concerns have been raised due to its policy of sending user data, including conversations and queries, to servers located in China. Incidents of censorship regarding content critical of China have been reported, raising the question of the extent of data privacy initiatives by DeepSeek. The company's data practices exemplify the challenges facing users around data privacy and the control companies hold over personal information.

573
critical -197
DEE000012825
Data Privacy Incident
Large-scale malicious attack
User data Conversations Queries Limitation of new sign-ups
User data Conversations Queries
Breach
01 Jan 2025 • DeepSeek
DeepSeek Database Exposure

DeepSeek's database was left exposed on the internet, leaking over 1 million records, including system logs, user submissions, and API tokens. Due to the database's nature as an analytics type, the breach of user interaction data and authentication keys poses a significant risk to user privacy. The issue was resolved within 30 minutes after Wiz researchers attempted to notify the company, by which time the database was secured.

573
critical -197
DEE000013125
Data Leak
Exposed Database
Misconfiguration
System logs User submissions API tokens Systems Affected: Database
Third Party Assistance: Wiz researchers Containment Measures: Secured the database
System logs User submissions API tokens Number Of Records Exposed: Over 1 million Sensitivity Of Data: User interaction data and authentication keys
Root Causes: Misconfiguration

Frequently Asked Questions

According to Rankiteo, the current A.I.-based Cyber Score for DeepSeek AI is 411, which corresponds to a Critical rating.

According to Rankiteo, the A.I. Rankiteo Cyber Score for November 2025 was 491.

According to Rankiteo, the A.I. Rankiteo Cyber Score for October 2025 was 490.

According to Rankiteo, the A.I. Rankiteo Cyber Score for September 2025 was 484.

According to Rankiteo, the A.I. Rankiteo Cyber Score for August 2025 was 479.

According to Rankiteo, the A.I. Rankiteo Cyber Score for July 2025 was 473.

According to Rankiteo, the A.I. Rankiteo Cyber Score for June 2025 was 587.

According to Rankiteo, the A.I. Rankiteo Cyber Score for May 2025 was 585.

According to Rankiteo, the A.I. Rankiteo Cyber Score for April 2025 was 581.

According to Rankiteo, the A.I. Rankiteo Cyber Score for March 2025 was 577.

According to Rankiteo, the A.I. Rankiteo Cyber Score for February 2025 was 570.

According to Rankiteo, the A.I. Rankiteo Cyber Score for January 2025 was 573.

Over the past 12 months, the average per-incident point impact on DeepSeek AI’s A.I Rankiteo Cyber Score has been -102.75 points.

You can access DeepSeek AI’s cyber incident details on Rankiteo by visiting the following link: https://www.rankiteo.com/company/deepseek-ai.

You can find the summary of the A.I Rankiteo Risk Scoring methodology on Rankiteo by visiting the following link: Rankiteo Algorithm.

You can view DeepSeek AI’s profile page on Rankiteo by visiting the following link: https://www.rankiteo.com/company/deepseek-ai.

With scores of 18.5/20 from OpenAI ChatGPT, 20/20 from Mistral AI, and 17/20 from Claude AI, the A.I. Rankiteo Risk Scoring methodology is validated as a market leader.