3 safeguards for intelligent machines

How can we ensure that autonomous devices, including Internet of Things endpoints, will never go rogue? Start with these three basic principles

Autonomous agents are a huge trend across consumer, business, industrial, and other domains. They’re popping up in everything from physical devices -- such as Internet of Things (IoT) endpoints and mobile handsets -- to cloud services such as virtual personal assistants and smart advisers.

Autonomous IoT devices will allow us to multitask like never before. As we incorporate more of them into our lives, we will be able to offload much of the drudgery we once handled manually: letting self-driving cars manage our commutes, handing the more strenuous yardwork to robotic household assistants, and depending on personal drones to keep an eye on the neighborhood.

Autonomy is a multilayered capability of IoT endpoints, made possible by a sophisticated blend of artificial intelligence, deep learning, smart sensors, and decision automation. In its IoT architectural framework, the OpenFog Consortium defines autonomy as “the ability of an intelligent system to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.” The consortium identifies the need for autonomous IoT endpoint capabilities in four broad areas: resource discovery and registration, service lifecycle management, security, and operations.
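To make those four areas concrete, here is a minimal sketch of what an endpoint’s autonomy-facing interface might look like. The class and method names are my own illustrative assumptions, not part of the OpenFog specification:

```python
from abc import ABC, abstractmethod

class AutonomousEndpoint(ABC):
    """Hypothetical interface grouping the four capability areas the
    OpenFog framework identifies. Names are illustrative, not from the spec."""

    # Resource discovery and registration
    @abstractmethod
    def discover_peers(self) -> list[str]:
        """Find nearby fog nodes and services without operator help."""

    @abstractmethod
    def register(self, registry_url: str) -> None:
        """Announce this endpoint's capabilities to a service registry."""

    # Service lifecycle management
    @abstractmethod
    def apply_update(self, package: bytes, signature: bytes) -> None:
        """Install a signed software update, rolling back on failure."""

    # Security
    @abstractmethod
    def authenticate(self, credential: bytes) -> bool:
        """Verify that the party issuing a command is authorized."""

    # Operations
    @abstractmethod
    def select_action(self, situation: dict) -> str:
        """Compose and select among courses of action for the current situation."""
```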

The risks of rogue autonomy are particularly acute on the operations side of IoT endpoints. The chief risk is that autonomous IoT devices -- when operating in complex, dynamic real-world situations -- will do things that a reasonable human would never have advised. Without an extensive track record across diverse real-world scenarios, it’s not clear if or when autonomous IoT endpoints will be able to do their jobs without some level of human supervision.

The core issue is how long a leash we’re willing to grant autonomous systems before we rein them in. In unprecedented circumstances, how can we rest assured that autonomous devices -- even when they haven’t strayed from their algorithmic programming -- will never “go rogue” and take actions that put us or others in danger, or for which we, as their owner/operators, may be held accountable in civil or criminal proceedings? This confidence must be grounded in three fundamental safeguards that are engineered into the devices themselves (see the sketch after this list):

  1. Accountability: We must always be able to use strong authentication, authorization, tamperproofing, encryption, auditing, and other security safeguards to ensure that only we can control and program our autonomous systems. The quickest way for a personal agent to go rogue is for an unauthorized third party to commandeer it, change its algorithmic programming, and redirect it to that party’s own nefarious purposes.
  2. Agency: We must always be able to ensure that our autonomous systems operate as our personal agents, in keeping with our best interests. For example, if you’re leveraging autonomous IoT devices as extensions of your “Internet of self,” you at least want confidence that your autonomous car won’t take you into dangerous neighborhoods, your autonomous thermostat won’t turn the heat down below your comfort level on cold winter nights, and your autonomous home security camera won’t unlock the door or turn on the lights when your crazy ex-husband shows up on your doorstep. Likewise, we would want their autonomy reined in by graphs of our individual profiles, personalities, preferences, and sensitivities, so that our intelligent devices never take any action inconsistent with how we ourselves would have behaved in the same circumstances. This would let our personal devices operate semi-autonomously while always serving our ultimate, albeit unstated, interests.
  3. Compliance: On a core level, we want our personal agents’ autonomy to be constrained by rules that encode the legal requirements of the jurisdictions in which they operate. We also want them to comply with situational ethics, an area where machine learning may prove useful, as I discussed in a recent blog post of mine and in another from last May.
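As a thought experiment, these three safeguards can be pictured as a device-side gate that every proposed action must pass before it executes. The sketch below is purely illustrative; OWNER_KEY, OWNER_PREFERENCES, JURISDICTION_RULES, and execute_if_safe are assumed names, and a production device would rely on hardware-backed keys and far richer policy models:

```python
import hashlib
import hmac

# Illustrative device-side policy gate. All names and policy structures here
# are assumptions for the sketch, not any real device's API.
OWNER_KEY = b"owner-shared-secret"                   # provisioned at device setup
OWNER_PREFERENCES = {"min_thermostat_f": 62}         # agency: the owner's limits
JURISDICTION_RULES = {"max_drone_altitude_ft": 400}  # compliance: local law

def is_authentic(command: bytes, tag: bytes) -> bool:
    """Accountability: only a command signed with the owner's key is obeyed."""
    expected = hmac.new(OWNER_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def serves_owner(action: dict) -> bool:
    """Agency: reject actions that violate the owner's preference profile."""
    if action.get("type") == "set_thermostat":
        return action["value_f"] >= OWNER_PREFERENCES["min_thermostat_f"]
    return True

def is_lawful(action: dict) -> bool:
    """Compliance: reject actions that break the local jurisdiction's rules."""
    if action.get("type") == "fly_drone":
        return action["altitude_ft"] <= JURISDICTION_RULES["max_drone_altitude_ft"]
    return True

def execute_if_safe(command: bytes, tag: bytes, action: dict) -> str:
    """Run all three checks, in order, before any action executes."""
    if not is_authentic(command, tag):
        return "rejected: unauthenticated command"
    if not serves_owner(action):
        return "rejected: conflicts with owner preferences"
    if not is_lawful(action):
        return "rejected: unlawful in this jurisdiction"
    return f"executing {action['type']}"

# Even a correctly signed command is refused if it would override the
# owner's comfort settings -- the agency safeguard in action.
cmd = b"set_thermostat:58"
tag = hmac.new(OWNER_KEY, cmd, hashlib.sha256).digest()
print(execute_if_safe(cmd, tag, {"type": "set_thermostat", "value_f": 58}))
# -> rejected: conflicts with owner preferences
```

Note that the ordering matters: authentication comes first, so an unauthorized party never even reaches the agency and compliance layers.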

For society to fully embrace autonomous IoT endpoints, we must be confident that they will always align their operation with the best interests of their owner/operators and of society at large. This is a different concern from the one I expressed in a recent post on autonomous weaponry, which, one would assume, will usually remain under the supervisory control of human operators.

We can engineer multilayered safeguards into autonomous IoT devices to keep them from going rogue. But we can’t easily stop rogue humans from using these or any other resources for bad purposes.
