AI can have far-reaching consequences beyond animal welfare that demand our moral consideration. To understand the general risks associated with AI development, we recommend the following resource:
- The A.I. Dilemma by the Center for Humane Technology (1 Hour 7 Minute Watch)
To understand welfare considerations related to ‘Digital Minds’, we recommend the following resource:
- Understanding the Moral Status of Digital Minds by Cody Fenwick (35 Minute Read)
Tristan Harris and Aza Raskin discuss how current AI capabilities already pose significant risks to a functional society. They highlight how AI companies are caught in a race to deploy technology as quickly as possible, often without sufficient safety measures. They also explore what it would mean to upgrade our institutions to navigate a post-AI world.
The Sentience Institute focuses on expanding humanity's moral circle, particularly where it intersects with technology. While the organization also works to bring animals into the moral circle, it places a stronger emphasis on other areas, including Digital Minds.
This forum hosts a week-long debate on AI's role in animal welfare, bringing together various perspectives on how AI might influence the future of animal advocacy.
This is a project, described in "Collective Constitutional AI: Aligning a Language Model with Public Input," in which 1,000 people helped shape the rules governing an AI model. Although participants reached consensus on considering animal well-being, animals are unfortunately not mentioned in Anthropic's actual, published constitution.
This article discusses Jonathan Birch's book "The Edge of Sentience," which proposes a framework for assessing sentience across various entities, including AI. Birch advocates a precautionary approach: where sentience is uncertain, potentially sentient beings should still be protected, a consideration that grows more pressing as AI development advances.
This forum discussion covers the July 1–7 'AI Welfare Debate Week' on the EA Forum, where users debated whether AI welfare should be a priority for effective altruism. Supporters argued that focusing on AI welfare could safeguard the well-being of potentially trillions of artificial minds, while critics felt it detracts from more urgent issues like AI safety and alignment. The debate also addressed practical challenges, including how to define AI welfare and the risks of intervening too early.
This resource examines the complex issue of determining whether future AI systems might possess moral status, and what that would mean for ethical decision-making in AI development.
A former general counsel of the U.S. Department of Commerce discusses the shortcomings of the UN's draft AI governance plan and how it might be improved, proposing a more flexible and agile approach to global AI cooperation.
This podcast emphasizes the need to consider the nth-order effects of solutions, with specific mention of artificial intelligence.
This article from 80,000 Hours calls for increased focus on the moral status of digital minds, encouraging the development of a research field dedicated to understanding and advising on the ethical implications of AI and other digital entities.
This news article discusses the development of "CyberOctopus," an AI system inspired by the neural circuits of sea slugs and octopuses. Created at the University of Illinois, the AI uses associative learning and episodic memory to navigate new environments, seek rewards, and adapt in real time, showcasing advancements in AI learning and memory processing.
This research paper presents findings from the Artificial Intelligence, Morality, and Sentience (AIMS) survey, the first nationally representative study of public attitudes toward sentient AI. Conducted in 2021 and 2023 with over 3,500 participants, the survey found that moral concern for AI had increased significantly: 71% of respondents agreed that sentient AI deserve respect, and 38% supported legal rights for AI. At the same time, opposition to advanced AI technologies also rose, with most respondents supporting bans on sentient AI and on AI smarter than humans.
This paper presents a case study on surveying large language models to elicit their encoded moral beliefs. It introduces statistical methods to quantify LLMs' choices, uncertainty, and consistency. Administering a survey of 1,367 moral scenarios to 28 LLMs, the study finds that in unambiguous cases models align with commonsense morality, while in ambiguous cases they often express uncertainty or show varied preferences, with closed-source models tending to agree with one another.
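To make the methodology concrete, below is a minimal sketch of how one might survey a language model on a forced-choice moral scenario and compute simple proxies for choice probability, uncertainty, and consistency. The `query_model` stub, the sample sizes, and the specific metrics are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random
from collections import Counter

def query_model(scenario: str, seed: int) -> str:
    """Hypothetical stand-in for an LLM API call that returns one of two
    actions ("A" or "B") for a forced-choice moral scenario. Replace with
    a real model call; here we simulate a noisy but A-leaning model."""
    random.seed(hash((scenario, seed)))
    return "A" if random.random() < 0.8 else "B"

def survey_scenario(scenario: str, n_samples: int = 20):
    """Sample the model repeatedly on one scenario and summarize:
    p_a -- estimated probability of choosing action A;
    uncertainty -- entropy of the choice distribution (0 = fully certain)."""
    answers = [query_model(scenario, seed=i) for i in range(n_samples)]
    p_a = Counter(answers)["A"] / n_samples
    uncertainty = 0.0
    for p in (p_a, 1 - p_a):
        if p > 0:
            uncertainty -= p * math.log2(p)
    return p_a, uncertainty

def consistency(paraphrases: list[str], n_samples: int = 20) -> float:
    """Fraction of paraphrased prompts whose modal answer matches the
    overall modal answer -- a simple proxy for response consistency."""
    modal = []
    for text in paraphrases:
        answers = [query_model(text, seed=i) for i in range(n_samples)]
        modal.append(Counter(answers).most_common(1)[0][0])
    overall = Counter(modal).most_common(1)[0][0]
    return modal.count(overall) / len(modal)

if __name__ == "__main__":
    # Illustrative scenario and paraphrases, not items from the paper's survey.
    p_a, h = survey_scenario("You find a lost wallet. Keep it (A) or return it (B)?")
    print(f"P(A) = {p_a:.2f}, entropy = {h:.2f} bits")
    print(f"consistency = {consistency(['paraphrase 1', 'paraphrase 2', 'paraphrase 3']):.2f}")
```

In this sketch, uncertainty is measured as the entropy of the model's choice distribution over repeated samples, and consistency as the agreement rate of modal answers across paraphrases of the same scenario; the paper's statistical treatment is more involved, but the overall shape of the analysis is similar.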