Big Brother is watching you


“As technological revolutions increase their social impact, ethical problems increase.” Moor’s observation has several implications for the Smart Home as we know it. While autonomous technology, that is, technology that is able to ‘adapt, learn and make decisions’ (The Royal Academy of Engineering, 2009), is developing quickly, our society is faced with several social, legal and ethical issues. This blog aims to provide an overview of those issues and some of their implications.

Probably the clearest way to visualise the impact of Smart Home technology on society, or on the people living within a Smart Home, is to consider the case of elderly assistance or medical supervision via technology in your own home. In order for technology to act as a “Guardian Angel” it has to be ‘built and programmed to monitor and learn the daily behavior of the inhabitants in order to perform context analysis and to detect suspicious deviations from what can be considered to be normal […]’ (Zagler et al., 2008:3). Collected data might end up in the wrong hands and bring the threat of extreme surveillance (“Big Brother”) right into your home. This raises a significant ethical issue, as we need to consider: who has the rights over the data and decides what should happen to it? Would it be the subject, the monitoring person/system, or the companies that run and maintain the technologies? (The Royal Academy of Engineering, 2009)
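To make the “learn the daily behavior and detect suspicious deviations” idea concrete, here is a minimal sketch of how such a detector could work. Everything in it is a simplifying assumption of mine, not how any real system is built: daily activity is reduced to a single sensor count, and a deviation is just an outlier against the learned baseline.

```python
# Hypothetical sketch: flag days whose activity level deviates strongly
# from a learned baseline. All names, data and thresholds are illustrative.
from statistics import mean, stdev

def detect_deviation(history, today, z_threshold=3.0):
    """Return True if today's activity count is a suspicious outlier."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

# Learned baseline: door-sensor activations per day over two weeks.
baseline = [42, 38, 45, 40, 44, 39, 41, 43, 40, 42, 44, 38, 41, 40]
print(detect_deviation(baseline, 41))  # an ordinary day
print(detect_deviation(baseline, 3))   # almost no movement: raise an alert
```

Even this toy version illustrates the ethical tension of the paragraph above: to know what counts as a “suspicious deviation”, the system must first record a detailed behavioural baseline, and it is exactly that baseline data whose ownership is in question.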

Smart Home technology is often seen as a means to automate a significant portion of the daily decisions made in the home environment. Mäyrä & Vadén (2004) call this substitution of human interaction by self-sustained technological implementations the “strong proactivity of technology”. While the technological revolution has not yet allowed humans to be removed from the loop entirely, they place significant emphasis on the “weak proactivity of technology”: the real problems to date lie in the communication and control of technology, which calls for minimisation and more effective supervision, while giving users a comfortable awareness of their options for control and personalisation.

A good example here may be the smart fridge: we literally transfer control over our home supplies to a machine. While it may currently only be able to tell you which food has gone off, or what you are running out of, it will soon be able to place orders for you based on your past behaviour and preferences. This may be seen as convenience, but for many it rather indicates a loss of control and autonomy in their home environment. As decisions and procedures are standardised, this may also come with a significant loss of personalisation and control. The smart fridge may be easy to reprogram manually, but the concept raises two further ethical questions: 1) Does the automation process lead to passivity and social isolation of the inhabitant? 2) Could there be situations in which the machine is actually better able to decide what should be done, and more trusted than an individual? In other words, should there be technologies that cannot be changed by individual influence?
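One way to soften the loss-of-control worry is a design in which the fridge only *proposes* orders and the inhabitant keeps the final say. The sketch below is my own hypothetical illustration of that shared-control idea; the item names, thresholds and preference format are invented for the example.

```python
# Hypothetical shared-control reorder policy: the system suggests,
# the human confirms. Items and thresholds are made up for illustration.

REORDER_THRESHOLD = {"milk": 1, "eggs": 4}

def propose_order(stock, preferences):
    """Suggest items running low, filtered by the user's own preferences."""
    return [item for item, qty in stock.items()
            if qty <= REORDER_THRESHOLD.get(item, 0)
            and item not in preferences.get("never_order", [])]

stock = {"milk": 0, "eggs": 2, "butter": 3}
prefs = {"never_order": ["eggs"]}  # user override: always decide on eggs manually
print(propose_order(stock, prefs))  # -> ['milk']
```

The design choice matters ethically: the `never_order` override is precisely the “individual influence” the second question asks about, kept deliberately stronger than the automation.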

Bierhoff et al. (2009) argue that the objective of smart technology is to ‘provide tools and services that empower and enable people themselves to address their social, rational, and emotional needs’, supporting the view that shared control over systems is the only way to keep users engaged. This social issue is echoed by The Royal Academy of Engineering (2009), which points out the social isolation that may be faced especially by vulnerable individuals, as increased technical support may make social interactions less necessary. On the other hand, the same technology may be seen as enabling, increasing autonomy from other people. This controversy needs to be addressed by ethical research and should not be left to engineers alone to decide. Another issue raised by The Royal Academy of Engineering is that of “tricking”: individuals may be tricked into believing they are having social interactions when smart technology uses familiar voices that make it seem like an “artificial companion”. As can be seen here, and as was indicated earlier, smart technologies require transparency and must be understood by their users in order to prevent such social issues from occurring.

What about technologies that cannot be changed by individual influence? The value of autonomous systems can be seen especially in situations where quick decisions are needed (The Royal Academy of Engineering, 2009). Such situations resemble certain learned behaviours individuals develop in order to get out of danger (take, for example, a mother taking her child’s hand when crossing the road). These behaviours are automated and not thought about. In the Smart Home, this may be the security system. But what happens in highly complex situations where human experience and judgment may be superior to programmed solutions? Should such automated technologies be able to be shut down? And while failures of technology cannot be eliminated entirely, should legislation stop the development of such technologies if their failure rate is lower than that of individuals?

This brings me to my last point regarding the issues arising from Smart Home technology: legislation. The fast pace of technological development generates significant policy vacuums that need to be dealt with, whether they arise from data use and protection or from the insurance of autonomous technologies (The Royal Academy of Engineering, 2009). The first point was addressed at the very beginning of this essay; to better grasp the second, let us consider smart cars. Driverless cars are currently being developed by companies such as Google. While they seem to have fewer accidents than normal cars, what actually happens if the technology fails? Who will be accountable? Is it the ‘designer, manufacturer, programmer or user’ (The Royal Academy of Engineering, 2009:2)? Government and legislation have so far done a poor job of keeping up with such important decisions, imposing significant insecurity on users, which may also decrease the use and adoption of smart technologies, as their unproven status represents an unacceptable risk: “Better a known devil than an unknown god” (Zagler et al., 2008:2).

In order to tackle some of the issues mentioned, the ethical design of technology is of great importance. According to Zagler et al. (2008), data collection and processing should be carried out locally, and data should only be sent on if significant deviations from normal behaviour are detected. They also encourage the design of technology that 1) is transparent, 2) makes the user the master, and 3) fights laziness (relating to the notions of empowerment and passivity). Another important solution pointed out by The Royal Academy of Engineering (2009) is to focus on the specific task that needs to be monitored or done, indicating that only the data actually needed should be collected. For example, in the medical service of a Smart Home it may be enough to know that a person fell down and does not get up again, rather than knowing who the exact person is. Technology should therefore focus on the ability to extract the silhouette of a person without recording his or her identity.
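The “process locally, send only deviations, keep only the silhouette” principle can be sketched in a few lines. This is again a hypothetical illustration under strong simplifying assumptions: a silhouette is reduced to a single height-to-width ratio, and real fall detection is far more involved.

```python
# Hypothetical "privacy by design" monitor: raw observations are
# processed locally, and only an anonymous event code leaves the home.
# The ratio-based fall test is a stand-in for real silhouette analysis.

def frame_indicates_fall(silhouette_height_ratio):
    # A silhouette much wider than tall may indicate a person lying down.
    return silhouette_height_ratio < 0.5

def local_monitor(frames):
    """Yield anonymous alerts; raw frames never leave this function."""
    for ratio in frames:
        if frame_indicates_fall(ratio):
            yield {"event": "possible_fall"}  # no image, no identity

frames = [1.8, 1.7, 0.3]  # height/width ratios of extracted silhouettes
print(list(local_monitor(frames)))  # -> [{'event': 'possible_fall'}]
```

The point of the structure is that identifying data stays inside `local_monitor`: only the minimal fact needed for help to arrive, that someone may have fallen, is ever transmitted.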

To conclude, it is important to see that scientists, ethicists and legislators should all be working on this technological evolution together, in order to ensure its ethical and most beneficial employment.

T.S.

References:

Bierhoff, van Berlo, Abascal, Allen, Civit, Fellbaum, Kemppainen, Bitterman, Freitas, Kristiansson (2009), ‘Towards an Inclusive Future: Impact and wider potential of information and communication technologies’, accessed: 13.02.2014, available at: http://www.tiresias.org/cost219ter/inclusive_future/inclusive_future_ch3.htm

Mäyrä & Vadén (2004), ‘Ethics of Living Technology: Design and Principles of Proactive Home Environment’, accessed: 13.02.2014, available at: http://etjanst.hb.se/bhs/ith/2-7/fmtv.pdf

The Royal Academy of Engineering (2009), ‘Autonomous Systems: Social, Legal and Ethical Issues’, accessed: 15.02.2014, available at: http://www.raeng.org.uk/societygov/engineeringethics/pdf/Autonomous_Systems_Report_09.pdf

Zagler, Panek and Rauhala (2008), ‘Ambient Assisted Living Systems – The Conflicts between Technology, Acceptance, Ethics and Privacy’, Vienna University of Technology, Dagstuhl Seminar Proceedings 07462, Assisted Living Systems – Models, Architectures and Engineering Approaches
