[1707.05373] Houdini: Fooling Deep Structured Prediction Models 
Added a year ago by Francis Tseng
Abstract: Generating adversarial examples is a critical step for evaluating and improving the robustness of learning machines. So far, most existing methods only work for classification and are not designed to alter the true performance measure of the problem at hand.
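For context on the contrast the abstract draws, here is a minimal sketch of the kind of classification-oriented attack it refers to: a one-step gradient-sign (FGSM-style) perturbation that maximizes a cross-entropy surrogate rather than the task's true performance measure. This is not the paper's Houdini method; `model`, `x`, and `y` are hypothetical placeholders for a trained PyTorch classifier, an input batch, and integer labels.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """One-step gradient-sign attack against a cross-entropy surrogate loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    # Classification surrogate loss, not the task-level metric (e.g. WER, mAP).
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the surrogate loss.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

Because the perturbation targets the surrogate, it may or may not degrade the evaluation metric of a structured task, which is the gap the paper's approach is meant to close.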