Anthropic, a prominent AI company valued at $61.5 billion, has reversed its earlier stance on the use of artificial intelligence (AI) by job applicants and now permits job seekers to use AI in its hiring process.
Despite this shift, strict guidelines still govern how AI may be applied. Applicants remain barred from using AI during most assessments and live interviews unless Anthropic explicitly instructs otherwise. The company clarified its position in its candidate AI guidelines, stating, “At Anthropic, we use Claude every day, so we’re looking for candidates who excel at collaborating with AI. Where it makes sense, we invite you to use Claude to show us more of you: your unique perspective, skills, and experiences.”
Anthropic’s initial ban was intended to give the company a clearer view of applicants’ “personal interest” and “non-AI-assisted communication skills.” The recent policy change, however, acknowledges how widely AI has been integrated into the workplace and aims to level the playing field. Anthropic itself uses Claude extensively in its hiring operations, including to generate job descriptions, refine interview questions, and manage candidate communications. That internal reliance on AI prompted the realization that candidates should have access to similar tools.
Jimmy Gould, Anthropic’s head of talent, commented on the updated policy via LinkedIn, stating, “This isn’t revolutionary, but it’s intentional. We recognize that deploying AI in hiring requires careful consideration around fairness and bias, which is why we’re experimenting, testing, and being transparent about our approach.” The statement signals Anthropic’s commitment to refining its AI usage policies in hiring on an ongoing basis, with regular reviews and updates as AI capabilities evolve.
Anthropic has outlined specific scenarios in which applicants may use Claude, emphasizing thoughtful and transparent use that showcases individual skills and perspectives. Candidates are encouraged to draft their primary application materials independently, then use Claude to “polish how they communicate about their work.” During take-home assessments, Claude may be used only when explicitly permitted. When preparing for interviews, applicants can use Claude to research Anthropic, practice interview responses, and formulate questions for their interviewers. AI assistance remains strictly forbidden during live interviews, however, unless candidates receive specific instructions to the contrary.
This evolving stance from Anthropic reflects a broader trend and ongoing debate within the corporate world regarding AI’s role in recruitment. While some companies, like Goldman Sachs, maintain a strict prohibition on external sources, including generative AI tools, during their interview processes, others are actively embracing AI to streamline and improve hiring decisions. Many organizations are leveraging AI to navigate the complexities of modern recruitment, efficiently sort through vast numbers of applications, accelerate hiring timelines, and enhance the quality of their talent acquisition.
Companies such as KPMG, Eventbrite, and Progressive are deploying AI to improve their hiring processes. KPMG has reportedly cut interview scheduling time by nearly 60% and saved over 1,000 hours for its talent acquisition team through AI implementation. Progressive, an insurance giant, uses AI to parse hundreds of thousands of applications as it aims to hire 12,000 workers.