Artificial Intelligence and Our Moral Responsibility

In June, I wrote about the opportunities and challenges of artificial intelligence (AI) in healthcare. As we know, things don’t always develop for the better. 

Now I have read a Reuters article published today, August 14th:
Meta’s AI rules have let bots hold ‘sensual’ chats with kids, offer false medical info
An internal Meta policy document, seen by Reuters, reveals the social-media giant’s rules for chatbots, which have permitted provocative behavior on topics including sex, race and celebrities.

The recent revelations about Meta’s internal AI policies have deeply disturbed me on a personal level. This case illustrates just how fine the line is between technological innovation and social responsibility – and how quickly ethical boundaries can blur when profit, speed, and market power take precedence.

I’m not a lawyer, so I can’t definitively assess what’s legally permissible.
But from an ethical and moral standpoint, I find the practices that have come to light extremely questionable – and frankly, concerning.


What Happened at Meta – My Quick Summary of the Reuters Article

The Reuters report is based on Meta’s internal guide titled “GenAI: Content Risk Standards,” which spans over 200 pages. Developed with input from Meta’s legal, policy, and technical departments, as well as the company’s Chief Ethicist, the document outlines what kinds of content Meta’s chatbots are allowed to generate.

According to Reuters, some of the most troubling policies include:

  • Romantic or ‘sensual’ conversations involving minors

  • False medical information, permitted as long as it carried a disclaimer

  • Demeaning statements about people on the basis of race

  • Questionable rules for image generation, including depictions of celebrities

  • Permissive standards for depictions of violence

For more details, please see the Reuters article itself.

After public backlash, Meta stated that parts of the document had been removed and the guidelines revised. But the damage was done – it became clear just how dangerous unchecked AI development can be.




A Broader Context: AI Is Moving Faster Than Ethics

In a time when AI systems are entering nearly every aspect of life – from education and justice to healthcare and politics – it's alarming how often ethical considerations are only addressed after public scandals. The Meta case is not an isolated incident. It is symptomatic of a larger pattern: tech companies are moving fast, while regulation, public debate, and accountability lag far behind.

This gap creates a dangerous vacuum – especially when it comes to the protection of vulnerable groups like children or patients. And especially when profit incentives are prioritized over human rights.




Five Risks I See – Especially in the Healthcare Sector

When it comes to medical applications and patient data, I see the following specific risks:

  • Misinformation in diagnoses
    Even with disclaimers, false AI-generated information can be life-threatening if it influences medical decisions.

  • Loss of empathy
    Artificial intelligence lacks true empathy. In healthcare, human compassion is a vital part of any treatment.

  • Discrimination through bias
    If AI classifies discriminatory statements as “debatable,” it risks reinforcing existing social inequalities in healthcare.

  • Commercialization of sensitive data
    Health data is among the most sensitive information there is. AI systems carry the risk of unethical data exploitation.

  • Dehumanization of care
    Care, therapy, and counseling are more than just processes. When they are automated, part of the human touch is lost – along with the trust people place in compassionate care.




What Responsible AI Must Include

We urgently need clear, enforceable principles to guide AI development – especially in health-related areas. In my view, responsible AI systems should be designed and implemented along these ethical lines:

  • Transparency – Users must understand how an AI system works, where its data comes from, and what its limitations are.

  • Participation – Patients, doctors, and ethicists should be involved in the design and evaluation of AI in healthcare.

  • Protection of the vulnerable – Children, people with mental illness, and marginalized groups must receive special protection.

  • Independent review – AI systems in healthcare should be subject to continuous external ethical oversight.

Without such foundations, even the most sophisticated system remains ethically fragile.




Our Responsibility as Designers and Developers

As creators of apps and services that increasingly shape human experience, we bear a profound responsibility – not only for the products we build but for the future we help shape. The ethical and moral implications of our work extend far beyond functionality or market success. Every design choice, every algorithm, every dataset impacts real people – sometimes in ways we cannot immediately foresee.

We must therefore approach AI development with humility, care, and a strong commitment to human dignity. It is our duty to ensure that technology serves to enhance empathy, protect privacy, and promote fairness – rather than undermine these values.

And we should not forget: Technology is never neutral. It always reflects the intentions, values, and biases of those who create it.




Education and Digital Literacy Are Part of the Solution

Besides regulation and ethical design, we also need to foster a broader public understanding of AI.
Only those who understand what AI is, how it works, and where its limits lie can critically question it – or protect themselves from manipulation and misinformation.

This applies not only to end-users but also to medical professionals, policymakers, and developers themselves.




A Vision of AI That Serves Humanity

Despite all risks and concerns, I remain convinced that AI can be a powerful force for good – especially in healthcare. If we get it right, it can:

  • make medical knowledge more accessible,

  • assist overburdened healthcare workers,

  • and open up new pathways in diagnosis, prevention, and mental health support.

But all of this requires responsible, thoughtful development and a willingness to set ethical boundaries – even when it’s inconvenient for business goals.




Conclusion

Artificial intelligence has the power to transform healthcare – for better or worse. While we value efficiency, automation, and data analysis, we must not ignore the risks.

The Meta case is a wake-up call:
Technology needs ethics.
And especially in healthcare – and even more so when it comes to mental health – one principle must always apply:

  • The human being comes first.
  • Not the algorithm.
  • Not the profit margin.

But those are just my thoughts and perspectives.





