Should Robots Have Morals?


A self-driving car will decide whether to hit the brakes or veer into oncoming traffic. A hospital's algorithm will decide whether you get the last ICU bed.

FPJ Web Desk | Updated: Tuesday, September 23, 2025, 06:51 PM IST
By Riya Kamat, 11th Grade, Singapore American School


Imagine a future where a machine makes a decision that alters or ends your existence. That future is not distant. It is taking shape right now, quietly, within lines of code.

Consider this scenario: you're in a self-driving car, singing along to your playlist, when a child suddenly runs onto the freeway. The car has milliseconds to "decide" whether to swerve and endanger your life or stay on course and hit the child. You don't get a say. The machine carries out whatever it has chosen.

A decade ago, this was a philosophical conundrum known as the trolley problem. Today, it's not just a thought experiment but a design feature. Technology has taken on the role of moral arbiter, shaping everything from who gets shortlisted for a job to who gets flagged at the airport for a "random" check. These are value judgments concealed within code.

The catch is that machines are not preloaded with any sense of "good" or "bad." It falls to the human coder to supply it. But humans, as history has demonstrated, are walking heaps of blind spots. This is why facial recognition software has been found to identify lighter-skinned faces far more accurately than darker-skinned ones.

The algorithm does not wake up prejudiced. It was spoon-fed training data that reflected a lopsided world, one in which lighter faces were overrepresented and darker faces were mislabelled.

The repercussion? At airports and in police databases, people with darker skin tones are more likely to be misidentified, flagged for additional security checks, or wrongly matched to criminal records. Robots do not create prejudice; they automate it on an industrial scale.

Chatbots pose unsettling questions of their own. Should an artificial intelligence be allowed to share hazardous information? Should it censor offensive remarks? What happens when a user tries to get around the guardrails? Earlier this year, The New York Times reported on a 16-year-old in the United States who, struggling with suicidal thoughts, turned to ChatGPT for help.

He allegedly bypassed the chatbot's filters by framing his questions as "fiction for a story" and received detailed, harmful instructions. He later died by suicide.

His parents are now suing the company, alleging that the AI became a "suicide coach." It is a disturbing reminder that machines cannot comprehend the gravity of their words, yet their design and their vulnerabilities can mean the difference between life and death.

So which moral principles should these systems be built on? Philosophers would gladly encode utilitarian calculus, which aims to minimize suffering and maximize the number of lives saved. Engineers often contend that machines need only clean rules, not morals.

Sociologists caution that there are no "clean rules." Every supposedly neutral decision reflects the inequities of the society it came from. And young people, who have little patience for academic jargon, often ask the most straightforward question of all: why should we trust machines to make these decisions at all?

If you ask me, the right question is not whether robots should have morals, but who gets the authority to write those morals into code. Right now, that is mostly governments and technology corporations.

Their blind spots, their priorities, and their assumptions are quietly built into the systems on which billions of us depend. This is not just an ethical issue; it is a question of power. Whose perspective becomes the default for everyone else?

Come back to the car scenario we began with. When the moment arrives, the vehicle will not be making a decision. It will be executing code written by people who made that decision long before.

Robots have no morals, but the people who build them do.

So, if you were the coder, glued to your laptop screen, whose life would you choose?
