Global investment bank Goldman Sachs has blocked its Hong Kong-based employees from using Anthropic's AI model, Claude, following a detailed internal review of its contractual agreement with the prominent AI startup. The move highlights the increasingly complex landscape where corporate technology adoption intersects with international geopolitics and regulatory compliance.
Goldman Sachs Restricts AI Access in Hong Kong
The restriction on Claude was implemented in recent weeks after the bank adopted a strict interpretation of its service agreement with Anthropic. Following direct consultations with the AI developer, Goldman Sachs concluded that its Hong Kong staff should be barred from using any of Anthropic's products. The bank took this preventative measure even though other major AI models, such as ChatGPT and Gemini, remain available on its internal platform.
This development is particularly noteworthy given that Goldman Sachs announced a collaboration with Anthropic in February to build AI agents for internal tasks. The bank's caution in Hong Kong, a region largely outside mainland China's strict controls on US-built AI, suggests a proactive compliance stance aimed at avoiding any potential breach of contract related to complex ownership and control clauses.
Anthropic's Geopolitically-Aware Usage Policy
At the heart of the issue is Anthropic's usage policy, which ties AI access to corporate ownership as much as to a user's physical location. The company restricts services for organizations that are more than 50% owned by entities based in unsupported regions, including China. This policy is explicitly designed to mitigate potential national security risks associated with authoritarian governments.
Anthropic's stated goal is to prevent situations where firms controlled from certain regions might face legal or political pressure to share user data or assist intelligence services. In practice, this means access to its powerful AI models can depend more on who ultimately controls a business than on where its employees are based. Goldman Sachs' action appears to align with a careful reading of these terms to ensure full compliance.
The Widening Impact on Global Technology Deployment
The challenges facing Goldman Sachs are not unique; they signal a broader trend for multinational corporations deploying new technologies. Rolling out AI across international markets is becoming increasingly entangled in specific contract terms, export controls, and national security rules, requiring firms to conduct thorough due diligence on both their technology partners and their own corporate structures.
The ripple effects of such restrictions extend beyond the financial sector, impacting other technology products with ties to Chinese ownership. For instance, some users of ByteDance-backed AI coding tools that utilize Anthropic's models have reportedly faced similar issues. These limitations may also create an opening for local AI rivals in restricted markets to capture customers unable to access leading US-based models.
In conclusion, the decision by Goldman Sachs to curtail the use of Claude in Hong Kong is a clear indicator of the new operational realities in the global technology sphere. It demonstrates that corporate strategy for AI adoption must now account for intricate geopolitical factors and nuanced contractual obligations. As nations continue to define their stances on AI, multinational firms will face growing pressure to navigate this complex and rapidly evolving regulatory terrain.