General Ethical Considerations in AI (Non-Animal)
🔹

General Ethical Considerations in AI (Non-Animal)

Tags
Constitutional AI

AI can have far-reaching consequences beyond animal welfare that demand our moral consideration. To understand the general risks associated with AI development, we recommend the following resource:

To understand welfare considerations related to ‘Digital Minds’, we recommend the following resource:

Resources

Level

2 views

Level

Table

Introductory
The A.I. Dilemma
The A.I. Dilemma

Tristan Harris and Aza Raskin discuss how current AI capabilities already pose significant risks to a functional society. They highlight how AI companies are caught in a race to deploy technology as quickly as possible, often without sufficient safety measures. They also explore what it would mean to upgrade our institutions to navigate a post-AI world.

Sentience Institute
Sentience Institute

The Sentience Institute focuses on expanding humanity’s moral circle specifically in-re technology. Although they also aim to include animals into the moral circle, they also have a stronger focus on other areas including Digital Minds

AI Welfare Debate Week 2024
AI Welfare Debate Week 2024

This forum hosts a week-long debate on AI's role in animal welfare, bringing together various perspectives on how AI might influence the future of animal advocacy.

Collective Constitutional AI: Aligning a Language Model with Public Input
Collective Constitutional AI: Aligning a Language Model with Public Input

This is a project where 1,000 people helped shape AI rules in "Collective Constitutional AI: Aligning a Language Model with Public Input." Despite consensus on considering animal well-being, animals are unfortunately not mentioned in Anthropic's actual or public constitution.

Can AI feel distress? Inside a new framework to assess sentience
Can AI feel distress? Inside a new framework to assess sentience

This article discusses Jonathan Birch's book "The Edge of Sentience," which proposes a framework for assessing sentience across various entities, including AI. It advocates for a precautionary approach to protect potentially sentient beings, emphasizing the need for ethical considerations in the context of AI development.

AI Welfare Debate Week 2024
AI Welfare Debate Week 2024

This forum discussion covers the July 1–7 ‘AI Welfare Debate Week’ on the EA Forum, where users debated whether AI welfare should be a priority for effective altruism (EA). Supporters argued that focusing on AI welfare could safeguard the well-being of potentially trillions of artificial minds, while critics felt it detracts from more urgent issues like AI safety and alignment. The debate also addressed practical challenges, including defining AI welfare and the risks of early interventions.

Understanding the moral status of digital minds
Understanding the moral status of digital minds

The complex issue of determining whether future AI systems might possess moral status and what that could mean for ethical decision-making in AI development.

Intermediate
The Good, the Not-So-Good, and the Ugly of the UN's Blueprint for AI
The Good, the Not-So-Good, and the Ugly of the UN's Blueprint for AI

A former general counsel of the U.S. Department of Commerce, discusses the challenges and potential improvements of the UN's draft AI governance plan. It proposes a more flexible and agile approach to global AI cooperation.

Podcast Recommendation: The Great Simplification
Podcast Recommendation: The Great Simplification

This podcast emphasizes the need to consider the nth-order effects of solutions, with specific mention of artificial intelligence.

Understanding the moral status of digital minds
Understanding the moral status of digital minds

This article from 80,000 Hours calls for increased focus on the moral status of digital minds, encouraging the development of a research field dedicated to understanding and advising on the ethical implications of AI and other digital entities.

CyberOctopus: New AI Explores, Remembers, and Seeks Novelty
CyberOctopus: New AI Explores, Remembers, and Seeks Novelty

This news article discusses the development of "CyberOctopus," an AI system inspired by the neural circuits of sea slugs and octopuses. Created at the University of Illinois, the AI uses associative learning and episodic memory to navigate new environments, seek rewards, and adapt in real time, showcasing advancements in AI learning and memory processing.

Technical
What Do People Think About Sentient AI?
What Do People Think About Sentient AI?

This research paper presents findings from the Artificial Intelligence, Morality, and Sentience (AIMS) survey, the first nationally representative study on public attitudes toward sentient AI. Conducted in 2021 and 2023 with over 3,500 participants, the survey found that moral concern for AI had significantly increased, with 71% agreeing that sentient AI deserve respect and 38% supporting legal rights for AI. However, opposition to advanced AI technologies is also rising, with most respondents supporting a ban on sentient AI and AI smarter than humans.

Evaluating the Moral Beliefs Encoded in LLMs
Evaluating the Moral Beliefs Encoded in LLMs

This paper presents a case study on surveying large language models to elicit their encoded moral beliefs. It introduces statistical methods to quantify LLMs' choices, uncertainty, and consistency. By administering a survey with 1,367 moral scenarios to 28 LLMs, the study finds that in unambiguous cases, models align with commonsense, while in ambiguous cases, they often express uncertainty or show varied preferences, with closed-source models tending to agree.

image