I, for one, would rather not have code tied up in ethics discussions. I disagree with your statement "You have to explicitly be OK with someone using your AI creation to harm kids, to destroy lives, to create scams, to automate cyberbullying, to impersonate loved ones". That's like loaning your car to a teenage relative and claiming it's equivalent to agreeing to their use of the vehicle to break road laws, do burnouts, and conduct ram raids. I don't think that's the case.

As usual, things are not binary "good" or "bad", but more subtle. Restrictions on use often end up restricting only the "good guys"; the bad actors really don't care what you think. And if you don't want your software used for harmful purposes, you first have to define harm.

One of the most heinous figures in the history of German chemistry discovered an efficient means of creating nitrate fertilisers (look it up). Without that discovery, it might be difficult to feed the earth's population at this point in time. I'm sure he thought he was doing the right thing. Oppenheimer and the Manhattan Project created nuclear weapons - was that the right thing? It depends on your perspective. Or rather their perspective, which you don't have a lot of control over.

And then, who fundamentally decides? Is it you personally, or a restriction the end users impose on themselves, assuming they even read the license? Trying to control who uses your software is well meaning, but in practice I think you can only do it where you explicitly gate access through a license. Once the software is "open", it's open.