Ethics Around Us
By Sarah Perez Feb. 11, 2026
Do the ends justify the means? Are some rules absolute, regardless of the situation? Should we judge actions by the principles they follow, or the results they produce? Different people may have different answers to ethical questions based on their ethical principles.
Ethical principles are formed by an individual's worldview—the specific outlook and standards they use to approach life, often influenced by culture, education, religion or family. When facing a situation with no perfect solution, people often turn to their ethical principles to respond to the dilemma. Failures to agree on ethical options shape many of the big ethical questions asked today.
Utilitarianism and deontology are two major ethical theories often used to navigate ethical dilemmas. Utilitarianism is a specific kind of consequentialism, a theory that judges options by their consequences: it selects the option in an ethical dilemma that brings the most “good” to the greatest number of people, regardless of the moral quality of the action itself. Deontology, by contrast, focuses on moral rules, choosing the more “moral” action rather than the one with the better result.
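The contrast between the two decision rules can be seen in a toy sketch. The options, rules, and “wellbeing” scores below are invented purely for illustration, not drawn from any real ethical framework’s formal model:

```python
# Toy contrast of the two decision rules: all names and numbers are
# hypothetical illustrations, not a formal model of either theory.

options = {
    "lie":   {"total_wellbeing": 8, "breaks_rule": True},
    "truth": {"total_wellbeing": 5, "breaks_rule": False},
}

def utilitarian_choice(opts):
    # Pick whichever option produces the most overall good,
    # regardless of whether it violates a moral rule.
    return max(opts, key=lambda o: opts[o]["total_wellbeing"])

def deontological_choice(opts):
    # Only consider options that obey the rules; among those,
    # consequences can still break ties.
    permitted = [o for o in opts if not opts[o]["breaks_rule"]]
    return max(permitted, key=lambda o: opts[o]["total_wellbeing"])

print(utilitarian_choice(options))    # "lie": more total good
print(deontological_choice(options))  # "truth": the rule-abiding act
```

The same inputs produce different answers, which is exactly the disagreement the two theories formalize.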
Ethical dilemmas can be seen in real life. According to the American Psychological Association, medical professionals must weigh possibilities and priorities when ethical values conflict. In line with a utilitarian worldview, physicians may make choices such as honoring religious preferences in treatment or trusting the consent of cognitively impaired patients, ultimately selecting the option that brings the most “good.” The American Medical Association’s Code of Medical Ethics outlines a physician’s responsibility to seek changes to the requirements of a law in order to serve a patient’s best interests. Even so, physicians are legally required to breach patient confidentiality in specific cases, such as reporting communicable diseases and abuse, which limits their decision-making power.
“Doctors should avoid their personal worldviews by disconnecting their opinions from their career—the laws regarding ethical situations should be stricter and varied to different cases so emotions are not involved. It could also prevent emotional trauma for doctors who believe they made a ‘wrong’ choice,” Sophomore Sivan Kotler said.
In our modern era, ethical dilemmas are not only relevant to humans; they raise challenges for technology as well. In 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted the Recommendation on the Ethics of Artificial Intelligence, a set of 10 policy recommendations protecting “human rights and dignity.” The recommendations, based on the principles of transparency and fairness, aim to regulate the impacts of AI on society. Recently, the use of AI in hiring has raised controversy for violating UNESCO’s Fairness and Non-Discrimination principle through its susceptibility to inaccuracy and discrimination. Humans make decisions based on their worldviews, but AI makes them through predictions based on its training data, often disadvantaging groups that are underrepresented in that data. Training AI to sort through resumes based on an existing workforce reflects the biases of past employers, a step backwards from the improvements to social justice UNESCO outlines.
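How an imbalanced training set reproduces itself can be shown with a minimal sketch. The groups, counts, and scoring rule here are all hypothetical, chosen only to make the representation effect visible:

```python
# Hypothetical illustration of representation bias: a toy "hiring model"
# that scores candidates by similarity to the existing workforce.
# Group labels and counts are invented for illustration.

from collections import Counter

# Training data drawn from a workforce where group A dominates.
past_hires = ["A"] * 90 + ["B"] * 10

def hire_score(candidate_group, history):
    """Score = fraction of past hires from the candidate's group.
    A model trained this way simply reproduces the existing imbalance."""
    counts = Counter(history)
    return counts[candidate_group] / len(history)

print(hire_score("A", past_hires))  # 0.9: majority group favored
print(hire_score("B", past_hires))  # 0.1: minority group disadvantaged
```

Nothing in the code mentions merit; the disparity comes entirely from who was represented in the data.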
“AI training data is drawn from publicly available information that represents the biases we hold as people—this makes it difficult to separate the two. If no person is completely unbiased, then AI is not either—it is not possible to change the training data to perfectly unbiased information because it does not exist,” Junior Angelica Liljenstam said.
Ethical dilemmas also play a significant role in other AI-powered technologies, such as autonomous vehicles (AVs). For example, an AV may face a situation in which a bicycle suddenly enters its path, forcing a choice between swerving into traffic and hitting the cyclist. The Responsibility-Sensitive Safety (RSS) model used in AVs is a formula for a safe following distance that, provided other drivers follow traffic rules, should eliminate the possibility of a collision. For AVs to make judgments in ethical dilemmas, the Journal of Law and Mobility argues, they should follow the obligations that make up the driving social contract between people. Under this social contract, the AV might take the utilitarian option and swerve into traffic, because doing so would prevent harm to the cyclist. Following the social contract would require translating these ethical principles into engineering requirements, a challenge that could reflect the engineers’ own biases.
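The RSS safe following distance mentioned above can be sketched in a few lines. This follows the published RSS formula for the minimum longitudinal gap, but the parameter values (response time, acceleration limits) are illustrative assumptions, not figures from any real AV:

```python
# Sketch of the RSS minimum safe longitudinal distance.
# Formula per published RSS; parameter defaults are illustrative only.

def rss_safe_distance(v_rear, v_front, rho=1.0,
                      a_accel_max=3.0, a_brake_min=4.0, a_brake_max=8.0):
    """Minimum gap (meters) so the rear car can always stop in time,
    even if the front car brakes as hard as physically possible.

    v_rear, v_front: speeds in m/s
    rho: rear driver's response time in seconds (assumed value)
    a_accel_max: worst-case acceleration of the rear car during rho
    a_brake_min: braking the rear car is guaranteed to apply afterward
    a_brake_max: hardest braking the front car might apply
    """
    v_resp = v_rear + rho * a_accel_max  # rear speed after response time
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_resp ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)  # a gap cannot be negative

# Both cars traveling 20 m/s (~45 mph): about 62.6 m of required gap.
print(round(rss_safe_distance(20.0, 20.0), 1))
```

Keeping at least this distance makes the rear vehicle blameless in a rear-end collision, which is the sense in which RSS “eliminates” that class of crash when everyone complies.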
Ultimately, the search for ethical solutions is complicated by competing worldviews. Professional codes may offer guidance on the ethical dilemmas that arise in their fields, but solutions remain largely ambiguous.
About the Contributor

Sarah Perez
staff writer
Sarah Perez is a sophomore Staff Writer at Leland High School. Some of her hobbies include figure skating, reading, hiking, and sleeping.