Trust and faith in machine agents
What is the cost of trust?
If my close friend lies to me, misleads me, or hurts me, my trust in them is damaged. Lost trust carries social and material consequences.
So how can I ever trust a machine, an algorithm, or a model when I have no material or social relationship with it?
“AI Trust” seems misguided.
I cannot trust a machine because it is incapable of repair. It is incapable of remorse. It is incapable of any meaningful relationship with me.
Trusting objects, trusting subjects
Any “trust” I invest in or afford a machine is the same sort of cheap, weak trust I give to cognitive shortcuts or objects: I “trust” a chair to hold my weight, but the chair owes me nothing if it fails.
But if algorithmic agents and machines have agency, then they demand more trust than I could possibly give them. Is this inherent divide, this fundamental lack of relationship, what the field of “AI Trust” is actually trying to explore?
Trust? Don’t you mean “faith”?
“Trust” in agents that have power (real or imagined) but face no material or social ramifications for how they exercise it isn’t trust. It is faith.
What researchers seem to be exploring is “faith” in AI, not actual trust. This is modern-day religious activity.