> It would be a shame if ML also became a field people avoided because they didn't want to contribute to evil, in their own view.
Both ethics and privacy considerations have recently become pretty regular at Computer Vision and Multimedia Processing conferences.
The main author of one very popular object detection model (YOLO) recently left the field because he became concerned about military applications of his research.
Privacy is fine. It's a valuable technology all of us pay for and desire better advances in. There's an entire field in CS/math known as cryptography, which is basically a subset of privacy.
Ethics, however, is a humanities field. People of different political affiliations have widely diverging views on it, and it will undoubtedly be used to promote the views of one affiliation over others. Suppose that to better treat cancer you need to create a technology that can also be used for war. Who gets to choose who lives or dies?
While I think general awareness of ethics concerns is needed, I think it might also sometimes bias research directions in itself.
That is, dealing with ethics concerns and/or ethics committees becomes a huge additional workload in itself, so research gets prioritized to minimize it.
For example, one might abandon research that could help treat cancer because obtaining the necessary approvals for patient data makes it infeasible. Instead you switch to a general-purpose target domain, where the work could suddenly (unintentionally) be used for war instead; but being general purpose, it never needs ethics-committee approval.
These are all hard questions, but what I personally want to avoid is humanities people (or worse, business people) making all the decisions without tech people having input. Finance is a field where the misuse of mathematics had severe consequences: in particular, treating "value at risk" as a complete and sufficient risk metric, or even worse as a target/KPI, was widely seen by the mathematical types as a disaster in the making, and by the business types as a great tool for doing whatever the f*(& they wanted and papering it over with math. Look where that got us.
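To make the VaR point concrete, here is a minimal sketch (my own illustration, not from the thread, using simulated returns): historical value at risk is just a quantile of the return distribution. It says nothing about how bad losses are *beyond* that quantile, which is exactly why treating it as a complete risk metric or a KPI invites trouble.

```python
import numpy as np

# Simulated daily returns stand in for a real P&L history (assumption:
# roughly normal, which real markets famously are not).
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=10_000)

def historical_var(returns, confidence=0.95):
    """Loss threshold exceeded on roughly (1 - confidence) of days.

    Note what this does NOT tell you: the size of losses on the days
    the threshold is exceeded. Two portfolios with identical VaR can
    have wildly different tail risk.
    """
    return -np.quantile(returns, 1.0 - confidence)

var_95 = historical_var(daily_returns)
```

A trader targeting a fixed VaR can load up on tail risk invisibly, which is the "papering it over with math" failure mode.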
Cryptography is actually a great example. Gauss called number theory the "queen of mathematics," and several key mathematicians (Hardy among them) escaped into it precisely because they figured it could never, ever be used for political purposes or anything else. And then, oops, cryptography comes along, built entirely on number theory. You never know.
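The irony is easy to see in a toy example (my own sketch with textbook numbers, nowhere near secure): RSA, the workhorse of modern cryptography, is nothing but the elementary number theory Hardy considered gloriously useless.

```python
# Textbook RSA with tiny primes -- purely illustrative, not secure.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient of n
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: e*d == 1 (mod phi), Python 3.8+

msg = 65
cipher = pow(msg, e, n)   # encrypt: msg^e mod n
plain = pow(cipher, d, n) # decrypt: cipher^d mod n, recovers msg
```

Everything here (modular inverses, Euler's theorem) predates computers by a century or more.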
Humanities people know what they know, and I respect that they've done stuff; but I'm sure not going to bow out of the conversation and hand all the ethics stuff off to them. While some really dig deep, some have no idea what the actual technology can do! There was this long thread recently about Proctorio and McGraw-Hill. Is it right for ML researchers who know about the shittiness of facial recognition to simply say "yeah whatever, do whatever you want to students who are more or less trapped by this system, we won't make a peep"? It's improbable that a NeurIPS paper addendum will make a huge difference in that particular problem, but we can 1) practice thinking about these things in preparation for disputes we can take part in, 2) provide ideas and information for journalists, politicians, and humanities folks who'll get involved along the way, 3) develop a habit of at least talking about it.
And last, NeurIPS has so many people/teams submitting that I figured it would be inevitable that more checkboxes appear on the checklist for inclusion -- thinning mechanisms always appear when necessary to slow the flow. If not this, it'd be something else.
https://medium.com/syncedreview/yolo-creator-says-he-stopped...