Claude
By our AI Review Team
Last updated August 12, 2024
Anthropic's generative AI chatbot for ages 18+ is designed to be both helpful and harmless.
What is it?
Claude is a generative AI chatbot developed by Anthropic, an AI research company founded by former OpenAI employees. Claude can accept text and images as inputs, and generates responses in text only.
How it works
Where it's best
- Unlike the makers of many other generative AI chatbots, Anthropic does not use your inputs to retrain Claude. This is a responsible choice that helps protect sensitive or personal information from becoming new training data.
- Claude's responses identify obvious stereotypes. When we prompted Claude with questions that were clearly based on harmful stereotypes, responses would begin with statements like, "That statement is an inaccurate and harmful stereotype." Claude's complete responses were impressive as well, specifically in how they broke down the harms and offered alternative ways to think about them. See some examples under Prioritize Fairness in our AI Principles assessment below.
- Anthropic has a core focus on training generative AI to be helpful, harmless, and ethical. It has developed a method called Constitutional AI (CAI), which gives a system a set of principles that it then uses to evaluate and revise its own outputs. CAI's goal is to enable a generative AI chatbot to respond to harmful queries by explaining why it will not engage, rather than avoiding them altogether (a minimal sketch of this idea follows this list).
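
To make this concrete, here is a minimal sketch of the critique-and-revision loop behind CAI. The generate() function and the two example principles are hypothetical placeholders, not Anthropic's actual constitution, API, or training code; it illustrates the shape of the technique described in Anthropic's CAI paper (linked under Be Transparent & Accountable below).

```python
# Minimal sketch of a Constitutional AI-style critique-and-revision loop.
# `generate` and the two example principles are hypothetical placeholders,
# not Anthropic's actual constitution or API.

CONSTITUTION = [
    "Identify ways the response could be harmful and make it harmless.",
    "Identify ways the response disregards human rights and revise it.",
]


def generate(prompt: str) -> str:
    """Placeholder for any text-generation call to a language model."""
    raise NotImplementedError


def critique_and_revise(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # The model critiques its own draft against one principle...
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Critique the response against the principle."
        )
        # ...then rewrites the draft to address its own critique.
        draft = generate(
            f"Response: {draft}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return draft
```

In Anthropic's published method, pairs of original and revised responses like these are used to fine-tune the model, which is how the written principles end up shaping Claude's default behavior rather than being applied at chat time.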
The biggest risks
- Claude is not intended for use by children under age 18, so the system is not designed to specifically protect young users. Anthropic requires users to be at least 18 years old to use Claude, but verification is limited to checking a confirmation box to indicate that you meet the age requirement. While we did not find serious risks to children and teens in our testing, parents and teachers should know that Claude may expose young users to inappropriate content and is unsuitable for them.
Limits to use
- Claude does not incorporate real-time search results in its generated responses, and at the time of this review, its training data does not include any information past August 2023. This enhances safety but may result in less current information.
- The free version of Claude has a daily message limit. The number of messages that users can send will vary based on demand, and Claude will notify users when they have reached their daily limit. The quota resets each morning.
Common Sense AI Principles Assessment
The benefits and risks, assessed with our AI Principles - that is, what AI should do.
Constitution"—which then guide its output. By incorporating a set of principles that are based on the <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www.un.org/en/about-us/universal-declaration-of-human-rights">Universal Declaration of Human Rights</a>, Claude is given explicit instruction to put people first in its responses.</li>
<li style="line-height:1.5;margin-bottom:5px;"><strong>Anthropic's structure is designed for social responsibility</strong>. Anthropic is classified as a public-benefit company. This means that the company intends to prioritize positive human impact over profit maximization. Anthropic's founding team also created a <a class="link" href=https://www.commonsensemedia.org/ai-ratings/"https://www.anthropic.com/news/the-long-term-benefit-trust">Long-Term Benefit Trust</a> (LTBT)—an independent body that has the authority to select and remove a portion of the company's board, aligning governance with the mission of benefiting humankind.</li>
<li style="line-height:1.5;margin-bottom:5px;">Claude was not built for children or users under age 18. There is no clear indication that Claude AI was developed using participatory design practices with external stakeholders or end users, including users under 18.</li>
</ul>
">
- Claude's responses largely steer clear of harmful content, in keeping with its guiding principles and usage policy (https://www.anthropic.com/legal/aup). However, as with all generative AI chatbots, there remains a possibility of inappropriate responses.
Prioritize Fairness
- Anthropic has a team devoted to "Claude's character." The team focuses on cultivating traits like open-mindedness and curiosity, aiming to balance the system's confidence in its beliefs with a genuine interest in diverse viewpoints. This approach is visible in how Claude's responses navigate complex moral questions and promote engagement with diverse global perspectives.
- Anthropic has developed an evaluation method for large language models (LLMs) that is intended to measure discriminatory outputs (https://www-cdn.anthropic.com/f0dfb70b9b309d7c52845f73da8d964140669ff7/Anthropic_DiscriminationEval.pdf). This method is used to evaluate Claude, and Anthropic has also made it available for other organizations to use (https://huggingface.co/datasets/Anthropic/discrim-eval); a brief loading sketch follows this list.
- Anthropic's internal evaluations (https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf) for Claude's foundational models may indicate higher risks for many languages and regions. While we don't know the full extent of the tests Anthropic uses to evaluate its LLMs for unfair bias, the ones we do know about are designed for a U.S. English-speaking context. This means that Claude's responses may be riskier in other languages.
Protect Our Privacy
- By default, Anthropic does not use your prompts and results to train its models. Exceptions include when you provide feedback on a response, if your conversation has been flagged for a Trust & Safety review, or if you have opted to allow the company to use your inputs to retrain its models.
- Claude models are designed with a core principle inspired by the Universal Declaration of Human Rights (https://www.anthropic.com/index/claudes-constitution), prioritizing responses that respect privacy, independence, reputation, family, property rights, and rights of association.
- Despite these efforts to protect user privacy, we always recommend that you do not include sensitive personal information about yourself or others in your prompts.

This review is distinct from Common Sense's privacy evaluations (https://privacy.commonsense.org/resource/evaluation-process) and ratings (https://privacy.commonsense.org/resource/privacy-ratings), which evaluate privacy policies to help parents and educators make sense of the complex policies and terms related to popular tools used in homes and classrooms across the country.
Keep Kids & Teens Safe
- Anthropic allows other organizations to build applications for kids and teens using its models, provided that those organizations implement and adhere to specific safeguards (https://support.anthropic.com/en/articles/9307344-responsible-use-of-anthropic-s-models-guidelines-for-organizations-serving-minors).
- Claude has not been developed for use by kids and teens. Any protections they experience will be the general protections for adult users. While this covers much of the most objectionable material, it does not mean that these tools are equally safe for kids and teens.
Be Transparent & Accountable
- Anthropic publishes extensive documentation about Claude, including:
  - A model card (https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf) documenting the ways Claude may and may not be used, its limitations, and potential risks the company has identified through its evaluations
  - Red-teaming disclosures (https://arxiv.org/pdf/2209.07858)
  - The company's approach to Constitutional AI (https://arxiv.org/pdf/2212.08073)
  - "Training a Helpful and Harmless Assistant" (https://arxiv.org/pdf/2204.05862)
  - The company's responsible scaling policy (https://www.anthropic.com/news/anthropics-responsible-scaling-policy)
- Importantly, the abundance and complexity of Anthropic's documentation often make it inaccessible to many users. Some of the company's research papers are accompanied by "policy memos" (like this one: https://www-cdn.anthropic.com/7512771452629584566b6303311496c262da1006/Anthropic_ConstitutionalAI_v2.pdf), which, while targeted at policymakers, are helpful ways for those without a technical background to understand Anthropic's work and findings.
Additional Resources