AI: Ethics, Authority, and Accountability
Who Is Responsible When AI Is Used?
As AI becomes more present in our daily lives — in offices,
schools, media, churches, and even households — an important question emerges:
When AI is used, who is actually responsible?
The short answer is simple: humans are.
The longer answer is where ethics begins.
AI does not make decisions in isolation. It does not hold
authority, values, or intent of its own. Every output, suggestion, or response
exists because a human asked a question, set a direction, accepted a result, or
chose to act on what was produced. Responsibility, therefore, does not shift —
it remains with the user.
This matters deeply in Papua New Guinea, where authority is
traditionally relational rather than abstract. Chiefs, elders, pastors,
managers, and parents are not respected because of systems — they are respected
because they are accountable to people.
AI should be treated the same way: as a tool under human
authority, not a replacement for it.
Humanising AI — and the Ethical Line
Across cultures, humans naturally humanise powerful forces
to better understand them. We see this in religion, where complex spiritual
truths are expressed through human figures — Jesus, Buddha, prophets,
ancestors. This does not mean those figures are “ordinary humans”; it means
people understand the world through relationship.
AI follows the same pattern. Giving it a name, a voice, or a
personality helps people learn faster, ask better questions, and engage more
confidently. This is not dangerous in itself — it is normal.
But humanising something also brings responsibility.
If we speak to AI as if it understands us, then we
must also behave as if our words matter. Saying “please” and “thank you”
may seem small, but these habits reinforce respect, patience, and restraint —
qualities that should never be lost in digital spaces.
The danger is not that people are polite to AI. The danger is when people become careless, abusive, or dismissive — because
that behaviour rarely stays contained. How we practise ethics with tools often
reflects how we practise ethics with people.
Accountability Cannot Be Outsourced
One of the most common ethical mistakes is blaming
technology for human decisions.
- “The AI told me to.”
- “The system generated it.”
- “That’s what the model said.”
These explanations may describe how something
happened, but they never explain why it was allowed to happen.
In journalism, law, education, health, and governance,
accountability must remain human. AI can assist research, summarise
information, or generate ideas — but judgement belongs to people. When
AI is wrong, misleading, biased, or harmful, the responsibility lies with
whoever chose to rely on it without question.
This is especially important in PNG, where trust is
personal. Blaming a machine undermines leadership. Owning decisions strengthens
it.
Ethics as a Daily Practice, Not a Rulebook
Ethics in AI does not begin with global frameworks or
technical policies. It begins with everyday choices:
- How do I use this tool?
- Do I check and question its outputs?
- Do I take responsibility for what I publish, say, or act upon?
- Do I use AI to lift others — or to shortcut integrity?
In PNG terms, ethics is not about perfection — it is about respect,
balance, and accountability to community.
AI, like fire or machinery or writing itself, can build or
destroy depending on how it is used. The difference is not the technology. The
difference is the person holding it.
The Core Principle
AI does not remove responsibility.
AI amplifies it.
As we step into 2026 — a year that will see AI used more
widely across Papua New Guinea — the guiding principle should remain clear:
Technology may assist us, but humanity must lead.
If we remember that, then AI becomes not a threat to dignity but a tool that reflects our best values when we choose to use it well.

