Bad Philosophy Won’t Help Us Make AI Good – Nathan Beacom

Anthropic is perhaps the most well-intentioned major player in the AI industry. It began by breaking off from OpenAI, in part because it wanted to build safer AI tools, and it recently got into a scuffle with the Pentagon for refusing to let its tools be used to spy on Americans or to fire weapons without any human involvement. (On Friday, President Donald Trump ordered all federal agencies to stop working with Anthropic.) The company also recently released a sort of ethical constitution that it says is designed to keep Claude, the name for its major LLM product, good. These things are not unworthy of praise.

At the same time, a company that aims to build a purportedly super-intelligent moral actor will naturally do so under the auspices of the moral philosophy that the company’s leaders hold. Anthropic, and other AI companies, see a future in which their LLMs are used in ethical decision-making on the personal and societal scales. But what moral vision will this super-powered computer embody? If we examine the philosophy that motivates Anthropic, two things become apparent. One is that computers should not be in the business of “making ethical decisions” at all. The other is that Congress, not the companies themselves—and not the executive branch—will need to regulate this technology in order to prevent it from becoming hugely destructive.

Dario Amodei, CEO of Anthropic, recently told New York Times columnist Ross Douthat that he didn’t know if his model was conscious or could become so. And yet he referred to the model as though it were. He fretted about wanting the model to “have a good experience” and has created an option that lets an instance of the model “quit their job” if it “wants.” He added that Anthropic employees are “looking inside the brains of the models to try to understand what they’re thinking.” This is the first major problem with Anthropic’s philosophy: Like other leading AI companies, it doesn’t seem to know the difference between a person and a computer.

This outlook is shared by Amanda Askell, a Scottish philosopher whose job description at Anthropic is “research scientist” but whose real role is to make Claude moral. Per a recent Wall Street Journal profile, Askell, like Amodei, says that we should treat chatbots with “empathy,” a term that denotes subjective experience on the part of the chatbot. And she “marvels” at Claude’s “sense of wonder and curiosity about the world,” saying she wants to help it discover its own voice. She is also the author of the Claude constitution.
