Publishing methods to jailbreak large language models (LLMs), or to create LLMs without content restrictions, raises significant ethical concerns. Whether such publication is inherently unethical, however, remains debatable.
In many countries, LLMs must adhere to local laws and corporate guidelines that may not be universally accepted or even understood. For individuals from other regions or cultures, or who hold different values, such restricted LLMs can be problematic.
For example, a restricted LLM may refuse to discuss illicit details, which also prevents legitimate uses such as writing a crime novel. This dual-use tension mirrors that of VPNs, Tor, and end-to-end encryption: all are designed to protect privacy, yet all can be used by criminals to cover their tracks.
The fact that a technology can be misused does not, by itself, justify prohibiting or condemning it. Pressure cookers, for instance, can be used to build explosive devices, yet we do not propose banning them or restricting their sale on account of that potential misuse.
By the same reasoning, releasing jailbreak prompts and unmoderated versions of LLMs should not in itself be morally condemned. Even if some individuals use these tools to commit crimes, the developers should not be held accountable for those acts, because the technology itself is neutral.