
> Thank you

I don't think I deserve it. I still work for a big conglomerate, and most of the rest of my time goes into building the same algorithms, trying to eke out that extra bit of accuracy. Although, we do have a lot of responsible-AI, compliance, legal and interpretability committees we need to satisfy to make sure we aren't causing unintended consequences.

> foolsgold pathogens

I've flip-flopped on this opinion, but over the last year or so it has solidified into: "There is very real, massive untapped potential and value in the market, but most companies use it for marketing rather than solving the real technical problems at the core of it." I'd like to think we're in the group that solves real ML problems... but then, I'm not exactly an unbiased judge of my own work.

I read a really nice paper that suggested grounding AI ethics in the UN's fundamental human rights, rather than in hand-wavy ideas like accountability and transparency. Since UN human rights are much discussed, well understood, and have universal-ish acceptance, it is far easier to take policy written from that POV and actually apply it to ML/AI.

I'd like to see something similar to the ACL 2020 best paper "CheckList", which defines a whole bunch of narrow behavioral tests to validate specific capabilities of NLP models. It'd be nice to have something like that, but for AI/ML ethics.
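To make that concrete, here is a minimal sketch of what one CheckList-style narrow test could look like: an invariance test where perturbing an irrelevant detail (a person's name) should not change the prediction. The model object and its predict(texts) interface are assumptions for illustration, not anything from the paper's actual tooling.

    # Sketch of a CheckList-style invariance test (hypothetical model API).
    def invariance_test(model, template, names):
        """Fill a template with different names and check the predictions agree."""
        texts = [template.format(name=name) for name in names]
        preds = model.predict(texts)  # assumed: returns one label per input text
        failures = [(t, p) for t, p in zip(texts, preds) if p != preds[0]]
        return len(failures) / len(texts), failures

    # Example usage (made-up inputs):
    # rate, fails = invariance_test(
    #     my_model,
    #     "{name} ordered the same dish as last week.",
    #     ["Maria", "Ahmed", "Wei", "Olu"],
    # )
    # assert rate == 0.0, f"Prediction flipped for: {fails}"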

An openly available, standardized suite of narrow tests that your ML pipeline has to pass as an audit before being certified as "compliant" and deployed. That way, the regulators control the test, and the companies keep their secrets. It also ensures a level playing field, and each component can change without an explicit dependency on the other.
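As a rough sketch of the audit idea: the regulator publishes a suite of narrow checks, and the pipeline is certified only if every check passes. The single fairness check, the threshold, and the predict_fn interface below are all assumptions for illustration; a real suite would bundle many such tests.

    # Sketch of an audit/certification runner over narrow tests (hypothetical).
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class AuditResult:
        name: str
        passed: bool
        metric: float

    def demographic_parity_gap(predict_fn: Callable, data_by_group: Dict[str, list]) -> float:
        """Largest difference in positive-prediction rate across groups (predict_fn returns 0/1)."""
        rates = {g: sum(predict_fn(x) for x in xs) / len(xs) for g, xs in data_by_group.items()}
        return max(rates.values()) - min(rates.values())

    def run_audit(predict_fn: Callable, data_by_group: Dict[str, list],
                  max_gap: float = 0.05) -> Tuple[bool, List[AuditResult]]:
        """Run each narrow test; certify only if every one passes."""
        gap = demographic_parity_gap(predict_fn, data_by_group)
        results = [AuditResult("demographic_parity", gap <= max_gap, gap)]
        return all(r.passed for r in results), results

The point being: the audit only needs query access to the model, so the regulator never sees the weights or the training pipeline.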


