Who is Held Accountable?
Something I've been ruminating on lately is the use of first-person language in LLM outputs. I think it should never have happened and should be rethought entirely.
How do you accept an apology from something that cannot feel regret, understand harm, or learn and adapt, something that genuinely does not "know"?
Who originated the harm implied by the apology? The company that trained the model? Or are we now considering it self-inflicted?
We might want to consider what "I will strive to do better" means when generated by a corporate-owned LLM.
Who or what, exactly, strives in this instance?
And who is responsible for the impact of their/its endeavors?
We must advocate for deanthropomorphizing AI.
It should not be positioned as “thinking”. It should not “apologize” because it won't/can't be held accountable. It can't be “clear” about things because it doesn't “know” anything. It doesn't “hallucinate”, “learn”, or “strive”.
Computers are not peers.