The ethics of Open Source
-
So would that suggest that, morally, you would be comfortable releasing code you knew could be (and perhaps was being) used for Evil Purposes, as long as you had a non-enforceable "Don't use this for Evil Purposes" statement in the code? At a practical level, an expensive, watertight, legally binding license is just as enforceable as your note in the minds of many. So I guess it comes down to: do you make a statement that has no teeth, or do you make a statement with teeth that will not really help the situation? Which reduces to: do you put effort into a statement, knowing it will not actually help, or do you just mail it in? Which is really: do you put time and money into a statement as a statement unto itself, or just put a statement in so you can say "I told them not to"? This stuff is hard.
cheers Chris Maunder
-
I effectively restrict my professional code to certain arenas; I will not work, for example, on weapons systems. I cannot control what my code under the MIT license is used for, however, and I produce a lot of that. If someone makes a missile guidance system with my JSON parser, well, I guess more power to them? I won't lose sleep over it, because I didn't make anything specifically for that purpose, and I don't feel morally or ethically obligated to control what other people do with my code. The other thing is: the people I would be least comfortable with using my code - bad actors in general, whether they used it to create malware or anything else I disagreed with - aren't the type of people to respect license agreements in the first place, so there's that to consider as well.

A long time ago I worked on productivity monitoring software for a workplace. I was in my twenties, and I wasn't considering how it was likely to be used. These days I wouldn't write such software because, sadly, it's most likely going to be used to abuse employees. That's just how that software works when you go to squeeze every last bit of "productivity" out of someone's workday. Software micromanagement isn't much better than the meat-based variety.

I get the same heebie-jeebies from AI. It's so easy to abuse AI. Want to sidestep its filters? Ask your question using ASCII art. Or tell it to write War and Peace using only the word "pudding". So even attempts to make it ethical don't work. LLMs are just not a "safe" technology - but then neither is the Internet, and look what the Internet has done (the bad as well as the good). I understand the Internet, and I've worked with it for long enough to temper what I produce such that I'm not unleashing something terrible upon the world. I can't say the same of anything I'd produce using LLMs or the like. I'd sooner just avoid it, and let other people be the ones to screw up the planet with it.
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
"tell it to write War and Peace using only the word "pudding". I'm staring and I'm staring and I'm staring at that sentence. Must...not...
cheers Chris Maunder
-
"tell it to write War and Peace using only the word "pudding". I'm staring and I'm staring and I'm staring at that sentence. Must...not...
cheers Chris Maunder
Researchers were doing things like that to get it to start dumping its training data at them. :)
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
The definition of Open Source, by the OSI, has a clause that I feel is an issue today.
Quote:
6. No Discrimination Against Fields of Endeavor
The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
I would argue that the most important issue facing development today is the ethics around AI - beyond even the economic upheavals going on, where entire job titles are disappearing in the same way that the printing press caused a fair bit of gnashing of teeth. AI provides the world with a weapon more dangerous than a gun, in a more convenient and cheaper package. We all know that laws are woefully slow to keep up with even the previous pace of IT innovation, and AI has leapt forward so fast that the catch-up will take years or decades.

The OSI specifically says that if you want their badge on your software you cannot say to someone: "with this code, do no harm". You have to explicitly be OK with someone using your AI creation to harm kids, to destroy lives, to create scams, to automate cyberbullying, to impersonate loved ones.

This isn't a commentary on the rights and wrongs of writing software. A knife can save a life or take a life: we need them, and so too with software. What I'm concerned about is whether, after 40 years, the blessing of the Open Source badge makes ethical sense.

AI provides an escape hatch here, where the code can remain Free (as in freedom) but the models are subject to ethical constraints imposed by the owners (or collectors) of the data. To me this won't work, because it's like saying the gun is safe because one of the types of bullets it uses is banned (but the other 9 are on Amazon next-day).

So what do you guys think? Let's ignore the practical difficulties of ever enforcing a restriction on code use, as well as the difficulty of defining "ethical" in a way that covers every culture, society and time. What is more important to you, as the developer of code you want to share with the World:

1. That the code is always able to be used for anything, without constraint
2. That you have the ability to restrict the use of your code based on ethical concerns
cheers Chris Maunder
1, because I'm not here to police others and I don't want anyone having to think too hard about whether they can use my open source stuff. It's why my license of choice for my software is BSD-3-Clause: basically, you can do what you want with it, except take credit for it, and if it breaks, you can keep all the pieces.
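For what it's worth, a minimal sketch of how that choice usually gets marked per-file (not legal advice; the year and name are placeholders, and the full license text still ships in a LICENSE file):

/* SPDX-License-Identifier: BSD-3-Clause */
/* Copyright (c) <year>, <copyright holder> -- placeholder values */
/* Full BSD-3-Clause terms, including the no-endorsement clause, live in LICENSE. */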
------------------------------------------------
If you say that getting the money is the most important thing
You will spend your life completely wasting your time
You will be doing things you don't like doing
In order to go on living
That is, to go on doing things you don't like doing
Which is stupid.
- Alan Watts
https://www.youtube.com/watch?v=-gXTZM_uPMY
-
For me, definitely the latter: that you have the ability to restrict the use of your code based on ethical concerns. I wrote a webcam movement detection security surveillance system around 12 years ago, which gets occasional updates and which is open source. I stated in the EULA that the software is not to be used to secretly monitor individuals, but is only for security or nature-watching purposes - knowing that some people would probably not pay any attention to that, but at least giving some of them time to pause and think about what they might be about to do. Yes, I am one of those who thinks that ethics is important and needs open discussion with regard to computer systems - although I don't think there was a single lecture on ethics on my computer science course back in university in 1988.
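To give the flavour - paraphrasing, not the exact EULA wording - the clause read something like:

  Use restriction: this software may be used for premises security or
  nature-watching purposes only. It may not be used to covertly monitor
  any individual without their knowledge and consent.

Unenforceable in practice, as I said, but it puts the intent on record.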
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
-
I've avoided creating certain types of projects altogether, in large part because there are no practical ethics adopted by the field, so there are no guardrails. "Just because you can, doesn't necessarily mean you should" looms large in my considerations, and I tend to think about the possibilities of my software. I don't want my code to be weaponized against people or used to abuse them. That means certain projects are simply a no-go for me. I don't do law enforcement (with one notable exception, where I made something to keep K9 units alive and safe in the car) and I don't do military. I don't do productivity monitoring or other kinds of surveillance, excepting things like closed-circuit security systems, which I think are fine. If there were some sort of ethical standards we all adopted, it would give me more freedom to work in arenas I currently just bow out of entirely, as above.
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
honey the codewitch wrote:
"Just because you can, doesn't necessarily mean you should"
I am in complete agreement, and it reminds me of the previous spate of "how do I hack a website" questions on the CodeProject forums, where the questioner would be sent packing fairly quickly. When I started writing the webcam security app there was not much out there that did the same, and it was only around 5+ years later that the surveillance/snooping apps and IoT devices really exploded. I think the thing about ethics is that there are a lot of grey areas and, as a massive generalisation, software developers don't tend to like grey areas - which may be why the topic of ethics, from a philosophical and moral point of view, is not spoken about a lot in the community.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
-
I'm actually pretty comfortable navigating grey areas, and interested in the softer, less concrete areas of life. Philosophy is interesting, and moral arguments and debate have value to us collectively. I can see that. I mean, fuzzy stuff is fine, as long as I can eke out parameters within the fuzziness that I can live with. The K9 project I mentioned is an example of me making an exception to a hard and fast ethical commitment I made, without actually violating the spirit of it. That's fine by me. But I am not most developers. I'm a little weird, even among the weird.

I used to sort of categorize developers based on two major approaches to development and, to some degree, life. One type of developer - the most common sort, IME - is methodical and likes to deal in the very tangible and concrete. They tend to produce solid software, and things like TDD even appeal to them sometimes. These are the sort that I think you were alluding to when you said devs don't like grey areas.

The other type of developer - a bit less common - is creative, but less methodical. If they were Buddhist, they'd appreciate Buddhism, but also blow snot on their robes. Translate that sort of nothing-is-sacred, everything-is-up-to-interpretation attitude to code. I don't want these people testing software, and too many make a project rudderless, but having one or two on a team keeps the team thinking around corners and solving problems where creativity is required. These developers can often work with the fuzzy stuff. I tend to fall into the latter category.
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
Not Open Source, but AI ethics related. Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods | Ars Technica[^]
-Sean ---- Fire Nuts
This kind of reminds me of the Lars Ulrich freakout over torrents and the basic availability of music that the Internet gave rise to. It didn't break Metallica, nor the music industry. But it did change it. The days of major record labels dictating who is popular are over. That's the good. The bad is obviously record sales, but artists (at least the ones I follow) have bridged the gap with more live shows. And I think the AI thing will shake out similarly. People aren't going to pay to see AI perform (except maybe Captured By Robots fans), and knockoff tracks I think will still mostly be comedic or otherwise unserious, like the Johnny Cash cover of the Barbie song on YouTube. So I think these artists maybe - through misunderstanding and common fear of tech - are overblowing the situation. That's just my opinion though - I have no crystal ball.
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
From the Ars Technica comments: I’m a Luddite (and So Can You!) | The Nib[^]

Two quotes from the comic that struck me:

> [William Morris] wanted people to take pleasure in their work rather than "mere toiling to live, that we may live to toil"

and, from the final frame:

> Questioning and resisting the worst excesses of technology isn't antithetical to progress. If your concept of "progress" doesn't put people at the center of it, is it even progress?

In that spirit of questioning: AI is obviously a further iteration of the industrial revolution, with all the disruption that entails, but is AI really all there is to human intelligence? We shouldn't underestimate ourselves.
-
I agree that AI is an extension of the industrial revolution. But while the industrial revolution brought progress, it also generated a lot of unethical corporate behavior that was eventually regulated out of existence. I suspect the same will happen to AI.
-Sean ---- Fire Nuts
-
honey the codewitch wrote:
So I think these artists maybe - through misunderstanding and common fear of tech - are overblowing the situation.
Quite possibly, but the next 10-15 years are not going to be pretty until the dust settles.
-Sean ---- Fire Nuts
-
I certainly agree with that.
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix
-
The definition of Open Source, by the OSI, has a clause that I feel is an issue today.
Quote:
6. No Discrimination Against Fields of Endeavor The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research.
I would argue that the most important issue facing development today is the ethics around AI. Beyond even the economic upheavals going on where entire job titles are disappearing in the same way that the printing press caused a fair bit of gnashing of teeth. AI provides the world with a weapon more dangerous than a gun in a more convenient and cheaper package. We all know that laws are woefully slow to keep up with even the previous pace of IT innovation, and AI has leapt forward so fast that the catch up will take years or decades. The OSI specifically says that if you want their badge on your software you cannot say to someone: "with this code, do no harm". You have to explicitly be OK with someone using your AI creation to harm kids, to destroy lives, to create scams, to automate cyberbullying, to impersonate loved ones. This isn't a commentary on the rights and wrongs of writing software. A knife can save a life or take a life: we need them, and so too with software. What I'm concerned about is whether, after 40 years, the blessing of the Open Source badge makes ethical sense. AI provides an escape hatch here where the code can remain Free (as in freedom) but the models are subject to ethical constraints imposed by the owners (or collectors) of the data. To me this won't work because it's like saying the gun is safe because one of the types of bullets it uses is banned (but the other 9 are on Amazon next-day). So what do you guys think? Let's ignore the practical difficulties of ever enforcing a restriction on code use, as well as the difficulty in defining "ethical" in a way that covers every culture, society and time. What is more important to you, as the developer of code you want to share with the World: 1. That the code is always able to be used for anything, without constraint 2. That you have the ability to restrict the use of your code based on ethical concerns
cheers Chris Maunder
Code I've written myself and make publicly available would have no restrictions, given that restrictions (even via licensure) are unenforceable. This is especially true for those interested in causing harm, aka 'bad actors'. Code that I knew could be integral to something that caused harm, I would not publish in the first place.
Software Zen:
delete this;
-
I may be a bit more radical about this, but I'd say: who cares about the OSI? Or, better yet, about a simple "seal of approval"? I certainly don't. Ever. Whenever I provide code, it is free of moral judgements as to what will be done with it. Whenever such concerns arise, I'd rather not share the code, and maybe not the app either - or, better still, I just don't start coding at all (at least for pet projects), because once the cat is out of the bag, the morals of whoever uses the code cannot be guaranteed.
- Leonardo
-
I, for one, would rather not have code tied up in ethics discussions. I disagree with your statement "You have to explicitly be OK with someone using your AI creation to harm kids, to destroy lives, to create scams, to automate cyberbullying, to impersonate loved ones". That's like loaning your car to your teenage relative and saying it's equivalent to agreeing to their use of the vehicle to break road laws, do burnouts, and conduct ram raids. I don't think that's the case. As usual, things are not binary "good" or "bad", but more subtle.

Often, restrictions on use serve only to restrict the "good guys". The bad people really don't care what you think. If you don't want your software used for harmful purposes, first define "harm". One of the most heinous people of the Nazi regime discovered an efficient means to create nitrate fertilisers (look it up). Without this discovery, it might be difficult to feed the population of the earth at this point in time. I'm sure he thought he was doing the right thing. Oppenheimer and the Manhattan Project created nuclear weapons - was that the right thing? It depends on your perspective. Or rather their perspective - which you don't have a lot of control over.

And then, who fundamentally decides? Is it you personally, or is it a self-imposed restriction on the end users, assuming they even read the license? Trying to control who uses your software is well-meaning, but in practice I think you can only do it where you explicitly allow access through a license. Once the software is "open" - it's open.