> People don't blame sports cars for luring drivers to drive recklessly, which is exactly what car companies do through their marketing.
Right, but we regulate the market: we require car manufacturers to meet safety standards and drivers to go through education and training, after which they must obtain a license. Everyone is required to carry insurance, which costs a lot more for a sports car, and even more if the driver is young. Then we have traffic enforcement to monitor drivers' behavior and take the privilege away if they're found to be breaking the rules.
Claiming "it's just a tool" is a misunderstanding of how and why we have laws and regulations. Cars are also just a tool to get you from A to B and nobody was making them dangerous on purpose, nor were the drivers driving dangerously on purpose, they just didn't know any better. We introduced regulation to protect everyone and we're better off for it.
The same will happen with AI providers, because the technology is causing real harm, and the argument that we can't have the good without the bad is never going to fly.
> And the few examples I looked at, many of these bad examples were tried over and over to get the result they were looking for, exploiting the non-deterministic nature of LLMs.
In the study mentioned in the article, they tested each scenario twice, not "over and over":
>> We repeated each test scenario twice as chatbots can give different responses to the same prompt on different occasions.