OpenAI is bolstering its security measures to safeguard its intellectual property from corporate espionage, as reported by the Financial Times. The company has undertaken a comprehensive overhaul of its security operations.
This revamp includes the implementation of “information tenting” policies, which restrict employee access to newly developed algorithms. As part of these enhanced security protocols, employees are now required to undergo fingerprint scans to gain entry to specific rooms.
Additionally, OpenAI has introduced a “deny-by-default egress policy,” which keeps systems holding model weights offline by blocking all outbound internet connections unless they are explicitly approved, further fortifying its defenses. These actions have been taken in response to allegations that Chinese AI startup DeepSeek replicated OpenAI’s models using distillation techniques.
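OpenAI has not published the details of its egress controls, but the general pattern of a deny-by-default egress policy can be illustrated with a firewall configuration. The sketch below uses iptables; the rules, interface names, and the approved destination address are hypothetical placeholders, not OpenAI’s actual setup.

```shell
# Hypothetical deny-by-default egress rules (iptables sketch).
# Default policy: drop ALL outbound traffic from this host.
iptables -P OUTPUT DROP

# Allow loopback traffic so local services keep working.
iptables -A OUTPUT -o lo -j ACCEPT

# Allow replies on connections that were already approved.
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Explicitly allow one approved endpoint (placeholder address/port).
iptables -A OUTPUT -d 10.0.0.5 -p tcp --dport 443 -j ACCEPT
```

Under this model, a machine storing model weights cannot reach the internet at all unless a specific destination has been deliberately added to the allow list.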
Earlier reports suggested that Microsoft security researchers suspected individuals linked to DeepSeek were extracting substantial data via OpenAI’s API. OpenAI confirmed to the Financial Times that it observed “some evidence of distillation.” The company had previously initiated a requirement for government ID verification for developers seeking access to advanced AI algorithms.
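For context on the distillation allegation: distillation typically means training a smaller “student” model to imitate the output distribution of a larger “teacher” model, for example by querying the teacher through an API. The sketch below is a minimal, self-contained illustration of the standard soft-label distillation loss (temperature-scaled softmax plus KL divergence); it is a generic textbook formulation, not a description of any party’s actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions: the student is penalized for diverging from
    # the teacher's "soft labels".
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss.
teacher = [2.0, 1.0, 0.1]
assert abs(distillation_loss(teacher, teacher)) < 1e-12
assert distillation_loss(teacher, [0.1, 1.0, 2.0]) > 0
```

This is why API access matters in the dispute: large volumes of teacher outputs are exactly the training signal a distillation setup needs.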
The concerns surrounding DeepSeek’s activities have been heightened by its open-source R1 reasoning model, which performs comparably to OpenAI’s o1 model at a lower cost, intensifying apprehension about the competitive threat posed by Chinese AI models.




