Moral dilemmas are part of human nature. While actions can be condensed into simple yes-or-no checkboxes, intentions fall into an intangible gray area that is hard to decipher. If machines, which are innately binary (0s and 1s), are tasked with similar decision-making at ethical crossroads, the chances of erratic behaviour are high; and unlike with people, assigning blame afterwards does nothing to improve the machine. Reinforcement learning in particular, a field that models trial-and-error decision-making loosely inspired by human cognition, is a natural place to investigate how machines behave under moral uncertainty.
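To make the idea concrete, here is a minimal sketch of one common formalization of moral uncertainty: the agent holds credences over competing ethical theories, each of which scores actions differently, and it picks the action with the highest credence-weighted score (expected choice-worthiness). The scenario, theory names, and numbers below are hypothetical illustrations, not taken from the article.

```python
# Hypothetical toy setup: an agent must pick between two actions while
# uncertain which of two ethical theories is correct.
ACTIONS = ["fast_but_risky", "slow_but_safe"]

# Each theory assigns a choice-worthiness score to each action
# (illustrative values only).
THEORIES = {
    "utilitarian":   {"fast_but_risky": 1.0,  "slow_but_safe": 0.6},
    "deontological": {"fast_but_risky": -1.0, "slow_but_safe": 0.8},
}

# Credences: the agent's degree of belief in each theory (sums to 1).
CREDENCES = {"utilitarian": 0.4, "deontological": 0.6}

def expected_choiceworthiness(action):
    """Credence-weighted score of an action across ethical theories."""
    return sum(CREDENCES[t] * THEORIES[t][action] for t in THEORIES)

def choose_action():
    """Pick the action that maximizes expected choice-worthiness."""
    return max(ACTIONS, key=expected_choiceworthiness)
```

Here the risky action wins under the utilitarian theory alone, but once the deontological theory's credence is weighed in, the safe action comes out ahead; this kind of credence-weighted score could, in principle, be folded into an RL agent's reward signal.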

#reinforcementlearning #artificial-intelligence

Machines That Don’t Kill: How Reinforcement Learning Can Solve Moral Uncertainties