Angelic AI to Make Machines More Human
An expert has written an article suggesting that we need to build a new kind of artificial intelligence (AI) called "Angelic AI." The goal is not to make machines smarter, but to make them more caring and better attuned to human values.
The author, Shekar Natarajan, shared a personal story about having to make a very difficult, emotional decision for his father. He realized that no computer program could have helped him with that. He also saw in his job how systems often treat people like parts of a machine, punishing small acts of kindness in the name of efficiency.
He argues that we should build AI that tries to be the best of humanity, not just the most logical. He calls this "Angelic AI" and proposes seven ideas for how to do it:
1. An Ethical "Brain" for the AI (Moral Cortex Layer): This would work like a conscience for the machine, forcing it to pause and consider the right thing to do, not just the fastest or cheapest thing. For example, it would question an order for a delivery driver to skip a stop to save time, because that delivery might be very important to the customer. (A rough sketch of this kind of check appears after the list.)
2. A "Care Network": This system would notice and reward acts of kindness. For example, if a technician stays late to help an elderly customer, the system would record this as a good thing, not a waste of time. This makes empathy a normal part of the work culture.
3. Learning from Human Decisions (Human Signal Intelligence): This would help the AI learn from times when a person's gut feeling was better than the computer's plan. For example, if a driver visits a lonely customer off their route and later finds that customer needs help, the AI would learn that sometimes breaking the rules for care is the right choice.
4. A Library of Good Deeds (Moral Memory): This is a safe database that collects stories of people doing the right thing. These stories can then be used to train new employees and leaders, building a culture of courage and empathy.
5. A "Pause" Button for People: This gives any worker the power to stop an AI's decision if they think it's wrong, without getting in trouble. For example, a customer service agent could pause an automated script to help a distressed customer.
6. Measuring Kindness (Compassion as a KPI): Companies should measure success not just by speed and profit, but also by compassion. Employees should be rewarded for being kind and helpful, not just for being fast.
7. Human-Centered Governance: This is a simple rule: humans must always have the final say. No matter how advanced the AI gets, a person must always be able to press a "red button" and override a machine's decision.
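To make ideas 1, 5, and 7 a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not from the article, and every name in it (Decision, moral_concerns, request_human_review) is hypothetical; it only shows the general shape of an automated decision pipeline that pauses for an ethical check and gives a person the final say.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str                                    # what the system wants to do
    reason: str                                    # why the optimizer chose it
    concerns: list = field(default_factory=list)   # ethical flags raised

def moral_concerns(decision: Decision) -> list:
    """Hypothetical 'moral cortex' check: flag decisions that trade
    away care for speed or cost. Real rules would be far richer."""
    flags = []
    if "skip" in decision.action and "save time" in decision.reason:
        flags.append("May harm a customer who depends on this stop.")
    return flags

def request_human_review(decision: Decision) -> bool:
    """Stand-in for the human 'pause button': a person, not the
    machine, makes the final call. Here we simply ask on the console."""
    print(f"PAUSED: {decision.action} ({decision.reason})")
    for concern in decision.concerns:
        print(f"  concern: {concern}")
    return input("Approve anyway? [y/N] ").strip().lower() == "y"

def execute(decision: Decision) -> None:
    # The ethical check runs before the action, never after.
    decision.concerns = moral_concerns(decision)
    if decision.concerns and not request_human_review(decision):
        print("Decision overridden by a human; action not taken.")
        return
    print(f"Executing: {decision.action}")

if __name__ == "__main__":
    execute(Decision(action="skip delivery stop #7",
                     reason="save time on route"))
```

The point of the sketch is simply the order of operations: the ethical check runs before the action, and when it raises a concern, a human, not the machine, decides what happens next.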
The author concludes that we don't need smarter machines; we need wiser systems. He would rather build technology that is slow for the right reasons than fast for the wrong ones.