Ethics of Deep Learning

Types of errors

  • Accuracy doesn't tell the whole story
  • Type 1: False positive
    • Unnecessary surgery
    • Slam on the brakes for no reason
  • Type 2: False negative
    • Untreated conditions
    • You crash into the car in front of you
  • Think about the ramifications of the different types of errors your model can make, and tune it accordingly (see the sketch below).
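
A minimal sketch of this trade-off, using entirely simulated scores for a hypothetical tumor-detection model (the labels, score distribution, and thresholds below are made-up assumptions, not a real system). Moving the decision threshold converts false negatives into false positives and back:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ground truth: 1 = condition present, 0 = healthy (deliberately imbalanced).
y_true = rng.binomial(1, 0.10, size=1000)
# Simulated model scores: positives score higher on average, with overlap (imperfect model).
scores = np.clip(0.30 * y_true + rng.normal(0.35, 0.15, size=1000), 0.0, 1.0)

for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    false_pos = int(np.sum((y_pred == 1) & (y_true == 0)))  # type 1: e.g. unnecessary surgery
    false_neg = int(np.sum((y_pred == 0) & (y_true == 1)))  # type 2: e.g. untreated condition
    accuracy = float(np.mean(y_pred == y_true))
    print(f"threshold={threshold:.1f}  accuracy={accuracy:.2f}  "
          f"false positives={false_pos}  false negatives={false_neg}")
```

On this simulated data the highest threshold tends to post the best accuracy while missing most of the real cases, and a model that always predicts "healthy" would already reach about 90% accuracy. That is why accuracy alone doesn't tell the whole story: pick the operating point based on which error costs more, not on accuracy.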

Hidden biases

  • Just because your model isn't human doesn't mean it's inherently fair
  • Example: train a model on which job applicants were hired in the past, then use it to screen new resumes.
    • Past biases around gender, age, or race will be reflected in your model, because they were reflected in the data you trained the model on. A quick per-group audit of the model's decisions (sketched below) can surface this before deployment.
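
One cheap sanity check is to compare the model's selection rate across groups on held-out data. The sketch below uses entirely made-up data (the `group` attribute, the `p_interview` rates, and the size of the bias gap are all simulated assumptions), but the audit pattern is the same with real model outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute and hypothetical model decisions; in a real audit
# these would come from held-out applicants and your trained screening model.
group = rng.choice(["A", "B"], size=n)
# Simulate a model that absorbed a historical bias against group B from its labels.
p_interview = np.where(group == "A", 0.30, 0.18)
model_selects = rng.binomial(1, p_interview)

for g in ("A", "B"):
    rate = model_selects[group == g].mean()
    print(f"group {g}: selected for interview {rate:.1%} of the time")
```

Comparing raw selection rates per group is only one crude check, but a large gap is a clear warning that the model is reproducing the bias baked into its training labels.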

Is it really better than a human?

  • Don't oversell the capabilities of an algorithm in your excitement
  • Example: medical diagnostics that are almost, but not quite, as good as a human doctor
  • Another example: self-driving cars that can kill people

Unintended applications of your research

  • Think about how your work could be twisted and used in ways you never intended.