The threats and opportunities presented by algorithmic entities

Almost as certain as the fact that humans will always continue to invent new technologies is the fact that those very same humans will be skeptical of the implications of those technologies. This is nothing new. As far back as 1492, roughly half a century after the printing press was invented, the monk Johannes Trithemius wrote disparagingly of the new invention: “The word written on parchment will last a thousand years … The most you can expect a book of paper to survive is two hundred years.”

Sometimes the skepticism morphs into full-blown fear, and at present much of that fear is concentrated on AI. Damning prognoses have been offered by entrepreneurs like Elon Musk and Bill Gates as well as academics like Stephen Hawking.

Yes, some of it is hyperbole. But some of it might not be. Take the case of algorithmic entities: the idea that we could give algorithms legal autonomy, separate from human beings. This concept could give rise to “autonomous” online businesses that function without human intervention, accepting payments and generally transacting with human and non-human agents alike.
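
To make the idea concrete, here is a minimal sketch in Python of what the decision loop of such a business might look like. Everything in it is hypothetical: PaymentGateway, Order, and the pricing rule are illustrative stand-ins, not a real payment API.

```python
# Hypothetical sketch of an "autonomous business": an algorithmic entity
# that quotes prices, accepts payments, and accumulates revenue with no
# human in the loop. All names here are illustrative stand-ins.
from dataclasses import dataclass

@dataclass
class Order:
    item: str
    amount_paid: float  # what the counterparty (human or bot) offers

class PaymentGateway:
    """Stand-in for a real payment service the entity would control."""
    def capture(self, amount: float) -> bool:
        return amount > 0  # accept any positive payment in this toy model

class AlgorithmicEntity:
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway
        self.treasury = 0.0  # wealth accrues to the entity itself

    def quote(self, item: str) -> float:
        # Pricing policy decided entirely by the algorithm.
        return 10.0 if item == "widget" else 25.0

    def transact(self, order: Order) -> bool:
        # Accept the order only if payment covers the quoted price.
        if order.amount_paid >= self.quote(order.item) and \
           self.gateway.capture(order.amount_paid):
            self.treasury += order.amount_paid
            return True
        return False

entity = AlgorithmicEntity(PaymentGateway())
print(entity.transact(Order("widget", 12.0)))  # True: sale completed
print(entity.treasury)                         # 12.0: revenue, no human owner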

Law professor Shawn Bayern warned in 2014 that this kind of autonomy would be exploited by criminal, terrorist and other anti-social forces in society. By giving computers legal identities, it becomes possible for nefarious actors to conceal their own identities as they participate in commerce and accumulate wealth.

There are a few aspects of corporate law that make this development especially vulnerable to abuse. First, in most countries it is possible for algorithms to exercise exclusive control over the vast majority of entity forms. Second, entities can move between regulatory regimes quickly and easily through migration. Third, many governments lack the ability to ascertain who controls the entities they have chartered. The combined effect of these factors makes such entities almost impossible to regulate properly, as the sketch below illustrates. Bottom line: the likelihood that criminal elements would make use of algorithmic entities is high.
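
To see how these factors compound, consider a toy model (the registries, entities and names are all hypothetical): if each jurisdiction's registry records only the immediate controller of an entity, and that controller can itself be an entity chartered elsewhere, tracing control back to a responsible party quickly dead-ends.

```python
# Hypothetical toy model: each jurisdiction's registry records only the
# immediate controller of an entity, and controllers can be entities
# chartered in other jurisdictions. Tracing "who is behind" an entity
# then stops at the first name no registry can (or will) explain.
registries = {
    "jurisdiction_A": {"ShellCo": "HoldCo"},     # HoldCo controls ShellCo
    "jurisdiction_B": {"HoldCo": "AlgorithmX"},  # an algorithm controls HoldCo
    # jurisdiction_C never shares its records at all
}

def trace_controller(entity: str) -> str:
    """Follow the chain of control across registries until it dead-ends."""
    current = entity
    while True:
        for registry in registries.values():
            if current in registry:
                current = registry[current]
                break
        else:
            return current  # no registry knows who controls this

print(trace_controller("ShellCo"))  # -> "AlgorithmX": the trail ends at code
```

The trail never reaches a human, and a quick migration to a less cooperative jurisdiction would cut it off even earlier.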

Is it all bad?

It must be said that many of the fears are predicated on the idea that AIs would act the way corporations presently do: myopically devoted to their own self-interest at the expense of the environment and, yes, human rights. On the one hand, this view ignores the fact that AI need not be a sci-fi nightmare if we consider carefully the initial programming of these entities and plan in fail-safes. On the other, it exposes the fact that many corporations already behave in the destructive ways described above, on a scale comparable to anything Asimov could imagine.

Furthermore, the creation of algorithmic entities does not necessarily mean an end to transparency. It may in fact lead to an increase in expert influence. For instance, the Bank for International Settlements currently provides independent research to central banks around the world regarding liquidity in the financial system, and in doing so it has been able to dampen monetary-policy volatility better than individual governments could on their own. That is only a hint of what would be possible if algorithmic entities were able to act beyond borders, free from political and social biases. Complex global concerns like immigration regulation and building resilience to climate change will require sophisticated ways of thinking to solve. Could algorithms be the solution?

Of course, your answer to that question probably says more about your biases than it does about reality. In general, we humans are ill-equipped to think through all the possibilities presented by algorithmic entities. At the end of the day, all of our fears are extrapolations of problems we have already seen in the world. But that knowledge is inadequate to frame the question, because AI takes us into realms of possibility we cannot fathom: the complexity of thinking that AI makes possible is currently beyond our reach.

And that is where the debate gets fuzzy. If we develop a system (or an algorithmic entity) that has some kind of unpredicted or unpredictable consequence, we will have no one (and nothing) to hold accountable. That means our justice system becomes impotent to correct mistakes. And once that happens, we have immobilized our own immune system.

Conclusion? For the first time in history, our skepticism may be wholly appropriate.