Anthropic Debuts AI Language Model Claude; Unlikely to Produce Dangerous Responses, Devs Claim

Claude, like ChatGPT, was trained on massive amounts of data, but its developers say it takes a notably different approach.

San Francisco-based Anthropic, a startup backed by Alphabet Inc., has unveiled a new artificial intelligence (AI) language model dubbed Claude, Reuters reports.

The model is designed to compete with OpenAI, the creator of ChatGPT, by producing human-like text from prompts and handling similar tasks.

Anthropic, however, takes a different approach, aiming to develop AI systems that are less likely to produce dangerous or biased content.

Unlike other chatbots that simply refuse to engage with certain topics, Claude is intended to explain its objections based on its principles.

Is Claude better than ChatGPT?

According to Richard Robinson, CEO of Robin AI, a London-based startup that uses AI to evaluate legal contracts, Claude was better at understanding complex legal terminology and less likely to give unexpected results than OpenAI's technology.

Claude will also be part of DuckDuckGo's latest AI-powered search feature.

"We're thrilled to be working with Anthropic on DuckAssist, the first Instant Answer in our search results to use natural language technology to generate answers to search queries using Wikipedia and other related sources," said Steve Fischer, DuckDuckGo's Chief Business Officer.

Beyond functioning as a standalone chatbot, Claude is designed to be integrated into existing products and toolchains with little effort.

In a news release, Anthropic said it has spent the past three months testing the system in a closed alpha with key partners, including the note-taking app Notion, the Q&A site Quora, and DuckDuckGo.

Claude is currently accessible through a chat interface in Anthropic's developer console and via an API. It can perform a wide range of conversational and text-processing tasks with a high degree of predictability and reliability.
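To illustrate what API access looks like in practice, here is a minimal sketch that assumes the official `anthropic` Python SDK and a valid API key; the model identifier shown is purely illustrative and may differ from what was available at launch.

```python
# Minimal sketch of calling Claude through Anthropic's API.
# Assumes the `anthropic` Python SDK is installed (pip install anthropic)
# and an API key is set in the ANTHROPIC_API_KEY environment variable.
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# The model name below is illustrative; substitute whichever Claude or
# Claude Instant identifier your account has access to.
response = client.messages.create(
    model="claude-instant-1.2",
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize this contract clause in plain English: ..."}
    ],
)

# Print the text of the model's reply.
print(response.content[0].text)
```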

Is Claude open to the public?

Currently, the company offers two versions of the model: Claude and Claude Instant. Claude is the high-performance model, while Claude Instant is lighter, less expensive, and significantly faster. The company plans to release additional upgrades in the coming weeks to make the system more efficient, honest, and secure.

Anthropic says it is excited about the applications Claude could power across industries. You can now gain access to Claude by sending a request to Anthropic.

Is Claude safe to use?

According to Reuters, Anthropic's co-founders, Dario and Daniela Amodei, both former OpenAI executives, have focused on building AI systems that are less likely than others to generate offensive or harmful content.

They accomplished this by giving Claude a set of guiding principles while the model was trained on vast amounts of text data.

Thanks largely to what Anthropic calls "Constitutional AI" and harmlessness training, the company claims users can rely on Claude to represent their businesses and their needs.

The company believes this approach makes it harder for users to circumvent Claude's limits through "prompt engineering" and keeps the model from generating harmful content.

Tech Times reported in late January that Anthropic secured a funding round valuing the company at $5 billion.

Stay posted here at Tech Times.
