There is growing concern and discussion about the need for caution in business deployments of artificial intelligence, so that the machines will not pursue activities or use information in harmful or unethical ways. The objective of putting ethical bounds on the behavior of self-strategizing machines is confounding and broad. That is the problem the technology itself presents. But another problem is people’s urge to ‘just find some way to forge ahead’ with these deployments.
This, I feel, has even caused some of the discussions on the ethics question to back off from the really tough issues and suggest specious, vague courses of action, just so that the question “should we hold off on deployments?” is never tabled. For example, this article quotes one consultant who says that people must understand the consequences of AI, that organizations should think about how they are using data, and about how their customers are affected by learning machines poring through it. In a subsequent paragraph, a computer science lecturer essentially negates business leaders’ ability to even know these things, stating: “We don’t know how an AI system is going to evolve and how it will influence future decision-making”. Did the author and whoever he works under at ZDNet really miss this incongruity? If so, they are certainly not exceptionally capable journalists.
The experience of ethical misconduct at businesses like Enron or Bernie Madoff’s “investment fund” evinces the obvious: that ethical constraint is not properly self-imposed in the first place. To suggest that ethics committees set up in unscrupulously run organizations are more than just dog-and-pony shows is insultingly naive and must be dismissed. As an employee, I’ve had to participate in mandatory ‘ethics training’ and certification, and 90% of it was a joke (a dog-and-pony show). I did learn more precisely where the line is drawn on some conduct questions (the other 10%). But your ethics are elemental to your own character; they are either a fundamental part of who you are or they’re not. An unethical person taking the same training only comes away knowing better what he can get away with and what he won’t be able to. In businesses, the morality and ethical quality of the leadership set the tone for the other players to follow, whether they be people or machines.
Real ethics enforcement requires third-party inspection and oversight (e.g., the SEC in the securities-trading business), usually backed up by defined laws and requirements. And the simple fact of the matter is that drawing such well-defined legal boundaries around all the manifold activity that an autonomously strategizing machine could possibly come up with is simply impossible. It looks like these machines will be deployed at a growing pace nonetheless, so we will just have to wait and see what mishaps and damage eventually occur. At that point, it will be the liability judgments that follow that determine the bounds enacted on autonomous machines’ ‘license to scheme’: since no a priori bounds on AI are being imposed now through proceeding more slowly and cautiously, they will be imposed ex post, when damages occur later. The people and organizations that unleash autonomous bots must, de facto, be held responsible for any harm their machines end up causing.
If that one stricture were written explicitly into present law, and a hard-nosed intent to enforce it made widely known, it would provide the most useful and encompassing speed brake on the current rush to forge ahead with open-ended business AI.