
AI Ethics 101: Latest Trends and Concerns for Intelligent Systems

Michael O'Dwyer | October 28, 2016



At the time of writing, AI ethics is a hot topic, given that several global players (namely Google, Amazon, Facebook, IBM and Microsoft) have founded the Partnership on Artificial Intelligence to Benefit People and Society. Their aims are to advance public awareness and define standards that researchers can use as guidelines. You have to wonder at the future benefits of this organization when the chosen name sounds like a company in Southeast Asia. Apple is conspicuously absent, perhaps unwilling to participate in any venture where the supply chain is not under its full control or a profit margin is not defined.

However, there are other efforts underway. The British Standards Institution has released BS 8611, a guide to the ethical design and application of robots and robotic systems. According to the International Business Times, the standard is aimed at human safety and based on Isaac Asimov's Three Laws of Robotics. For the moment, the standard focuses on personal care, industrial and medical applications. This is sure to change as more devices become "smart," and evolving standards will likely preempt the rise of domestic appliances seeking to overcome their human oppressors.

What drives today's concerns with AI ethics? What are the current barriers to ubiquitous AI adoption? How can AI evolve ethically?

AI Reality

Despite all the hype, AI is very much in its infancy. The term artificial intelligence itself is a misnomer, as autonomous decision making is not yet a reality and an "artificial brain" that compares favorably with its human counterpart is a long way off.

Instead, what we have today involves predictive data and pattern analysis, driven by evolving algorithms provided by data scientists. With the cloud, swarm intelligence and high-performance computing (HPC), more data is gathered for analysis and an action is performed depending on the results. Where AI ethics becomes relevant is in defining how much human control is necessary to protect human interests, whether these lie in human safety (as in Asimov's laws) or privacy (the prevention of an Orwellian future as depicted in "1984").
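To make that division of labor concrete, here is a minimal sketch in Python. The Prediction type, the confidence threshold and the action labels are invented for illustration and are not any vendor's API: the system analyzes the data and acts on its own only when it is confident enough, otherwise handing the decision back to a person.

```python
# Minimal sketch of today's "AI" loop: run a predictive model over the data,
# then act automatically only when confidence is high enough -- otherwise
# defer to a human. The model output, threshold, and action names here are
# illustrative assumptions, not any particular product's API.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # e.g. "approve", "deny", "flag"
    confidence: float   # 0.0 - 1.0, produced by the pattern-analysis model

HUMAN_CONTROL_THRESHOLD = 0.95  # how much autonomy we are willing to grant

def decide(prediction: Prediction) -> str:
    """Return the action to take, escalating to a person when unsure."""
    if prediction.confidence >= HUMAN_CONTROL_THRESHOLD:
        return f"auto:{prediction.label}"   # the machine acts on its own
    return "escalate:human_review"          # a person stays in the loop

if __name__ == "__main__":
    print(decide(Prediction(label="approve", confidence=0.98)))  # auto:approve
    print(decide(Prediction(label="approve", confidence=0.62)))  # escalate:human_review
```

Where that threshold sits, and who gets to set it, is precisely the question AI ethics is trying to answer.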

Related Article: Intelligent Systems: The Rise of the Machines Has Begun

We do not have to worry about anything becoming "self-aware" and destroying the human race. Robots in movies such as "Terminator," "RoboCop" (the enforcement droids, not the main character) and "I, Robot," and on TV (Data in "Star Trek: TNG," for example) are firmly in the realms of science fiction. Our concerns are much simpler for now.

Artificial Stupidity (AS)?

While it is true that AI is advancing for a variety of reasons, including but not limited to better sensors, increased processing power, robotics, machine learning, and advances in data science and engineering, the biggest stumbling block remains unsolved.

"The biggest weakness of any AI is ... humans. The 'creators' will introduce the bugs in the same way they introduce them to the latest version of Microsoft Office," said Vaclav Vincalek, president of Pacific Coast Information Systems, a Vancouver-based provider of strategic IT consulting services.

Citing two specific instances of AI gone wrong, Vincalek argues that we should be afraid of AI that works exactly as "designed" or "expected." In the first instance, UK-based traders in the British pound awoke to find that the currency had dropped six percent overnight, with automated trading algorithms the suspected cause.

Secondly, a story on Forbes confirmed the security weaknesses of AI when Apple's Siri opened the owner's front door to a neighbor who had simply shouted "Siri, open the door." Hardly high-tech hacking, is it?

"An additional danger is that the algorithms are more complex; and any ability to troubleshoot and identify a problem is getting more difficult as the algorithm adds new conditions," said Vincalek.

Other Concerns

In terms of cybersecurity, "AI works better with more data. The more private it is, the better it can profile somebody and predict what users' needs are. The risk is that companies managing the AI will do whatever it takes to get their hands on the information," said Sorin Mustaca, CSSLP, Security+, Project+, an independent IT security consultant. If an AI is used to protect assets without the proper control mechanisms, it might do whatever is necessary to do so, said Mustaca, adding that this might hurt others or cause damage.

According to Vincalek, widespread adoption is prevented not by infrastructure limitations but by staff shortages in the specialized areas needed to advance AI. Mustaca has other concerns. "What worries me is that corporations creating AIs will program them to work for their financial or strategic advantage. When this happens, things go (very) wrong and we face situations where we cannot determine if the damage was caused by the AI or by the humans who programmed it."

Clearly, AI usage has its pros and cons, with automation and process improvement the key benefits in several industries. The cons include a lack of transparency about the data harvested and the potential for misuse. The onus is on AI creators to conform to data privacy and security requirements and, as Mustaca points out, organizations should only use AI in smaller, clearly defined areas where partial information is enough to achieve their goals. For example, the use of personally identifiable information (PII) is a trigger for many of us. In addition, Vincalek is convinced that without proper testing of all AI systems, we enable decision-making with far-reaching consequences and without man-in-the-middle fail-safes, that is, actual human monitoring.
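To make those two safeguards a little more concrete, here is a minimal sketch. The field names, the PII list and the review queue are assumptions invented for this example rather than any real product: private fields are stripped before data reaches the model, and high-impact actions are parked for human review instead of executing automatically.

```python
# Illustrative sketch of two safeguards discussed above: data minimization
# (only partial, non-PII information reaches the model) and a man-in-the-middle
# fail-safe (a person must approve high-impact actions before they execute).
# Field names, the PII list, and the review queue are invented for this example.

PII_FIELDS = {"name", "email", "phone", "address", "ssn"}

def minimize(record: dict) -> dict:
    """Pass the model only the fields it genuinely needs."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

# Stand-in for a real review workflow: high-impact decisions are parked in a
# queue for a human instead of being executed automatically.
human_review_queue: list[str] = []

def act(decision: str, impact: str) -> None:
    if impact == "high":
        human_review_queue.append(decision)    # fail-safe: a person decides
        print(f"Queued for human review: {decision}")
    else:
        print(f"Executing automatically: {decision}")

if __name__ == "__main__":
    customer = {"name": "Jane Doe", "email": "jane@example.com", "usage_gb": 42}
    print(minimize(customer))               # {'usage_gb': 42} -- no PII leaves the boundary
    act("throttle_account", impact="high")  # Queued for human review
    act("send_usage_report", impact="low")  # Executing automatically
```

Neither safeguard requires exotic technology; both simply require deciding in advance which data and which decisions the machine is never allowed to handle alone.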

How much control are you willing to give AI when AI ethics has yet to be defined in any official manner? Shouldn't ethics standards have been finalized before data analytics and AI took off? I believe so.

Topics: IT insights


THIS POST WAS WRITTEN BY Michael O'Dwyer

An Irishman based in Hong Kong, Michael O’Dwyer is a business & technology journalist, independent consultant and writer who specializes in writing for enterprise, small business and IT audiences. With 20+ years of experience in everything from IT and electronic component-level failure analysis to process improvement and supply chains (and an in-depth knowledge of Klingon), Michael is a sought-after writer whose quality sources, deep research and quirky sense of humor ensure he’s welcome in high-profile publications such as The Street and Fortune 100 IT portals.
