[1901.00064] Impossibility and Uncertainty Theorems in AI Value Alignment (or why your AGI should not have a utility function)

Abstract: Utility functions or their equivalents (value functions, objective functions, loss functions, reward functions, preference orderings) are a central tool in most current machine learning systems. These mechanisms for defining goals and guiding optimization run into practical and conceptual difficulty when there are independent, multi-dimensional objectives that need to be pursued simultaneously and cannot be reduced to each other.
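
To make the difficulty concrete, here is a minimal Python sketch (illustrative only, not from the paper): under a scalar utility every pair of outcomes is comparable, but independent objectives can leave outcomes Pareto-incomparable, and collapsing them into a single number imports an arbitrary weighting. The objective values and weights below are hypothetical.

    # Minimal sketch (not from the paper): multi-dimensional objectives
    # admit Pareto-incomparable outcomes, so no single total ordering exists.

    def dominates(a, b):
        """Pareto dominance: a is at least as good as b on every objective
        and strictly better on at least one."""
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    # Two hypothetical outcomes scored on two objectives that cannot be
    # reduced to each other (say, safety vs. task performance).
    a = (0.9, 0.2)
    b = (0.3, 0.8)

    print(dominates(a, b), dominates(b, a))  # False False: incomparable

    # Collapsing to a scalar utility forces a ranking, but the ranking
    # flips with the arbitrary choice of weights, so the reduction is
    # not innocent.
    def utility(x, weights):
        return sum(w * v for w, v in zip(weights, x))

    print(utility(a, (0.6, 0.4)) > utility(b, (0.6, 0.4)))  # True:  a wins
    print(utility(a, (0.4, 0.6)) > utility(b, (0.4, 0.6)))  # False: b wins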