During the early 1990s, I was visiting with the president of a top-ranked graduate management institute in India. The institute was so popular among students entering college that it required every applicant to sit for an admissions test, which typically included at least one essay. The president told me that one year he asked the applicants to write an essay on the statement, “There is no right way to do a wrong thing.” The essays submitted were brief and incoherent; they took him far less time to read, but he was disappointed in the outcome.
I believe this is almost always true of everyone, not just students. We know the mantra, but we do not know how it enters our general behavior. We understand the concept, but we cannot seem to apply it well enough to make it second nature. For example, on the reality TV show Shark Tank, one budding innovator showcased her product, a mirror called Skinny Mirror that made people look a few sizes slimmer than they actually are.1 The idea is to boost the mirror users’ self-confidence at the cost of lying to them, perhaps in the hope that they will strive to become what they see in the reflection. But lying is lying, regardless of its form, and the “sharks” unanimously rejected the product idea as repulsive. Thus, we might cross the line not because we do not know what or where the line is, but because we cannot incorporate it into our daily duties well enough.
Means-Ends Relationships
Means do not justify ends. Something that is wrong is wrong, regardless of the path followed to get there. However, I am particularly puzzled by the fact that technology is so often victimized, pressed into service to arrive at the wrong ends. The only difference over the decades is that, in the past, the impact was limited to the organization and its immediate stakeholders; now, the scalability of technology delivers the outcome in a massively pervasive fashion. Despite all that technology brings to improve lives and the living environment, it just cannot seem to shield itself from creative deployment for the wrong ends.
The underlying human element is the culprit, although at first sight technology seems to be the defendant. Take, for example, the recent crisis at Volkswagen (VW). While the details are sketchy at this stage, it appears that senior engineers at VW embedded code in the company’s emission software to manipulate test results. The code became active only when emission levels were being measured; at all other times, the vehicles’ emissions violated the benchmark requirements. As a result, the company, a backbone of the German economy, is now in deep trouble. The VW incident has left many people wondering if they can trust any business to do the right thing. And this is not the only case; many of today’s moral transgressions rely on technology to be accomplished.
To a degree, IT permits anonymity (along with scale), which, in turn, may invite indulgence in moral temptations. Ashley Madison is a graphic example. Empowered by IT, the entire business model rested on the idea that if individuals wished to indulge, the site would both facilitate the indulgence and help them hide it. The eternal temptation is vividly captured in the company’s slogan: “Life is short. Have an affair.” The business model here is upside-down; I wonder if a chief executive can set the right tone when his company’s business objectives are improper, if not illegal. The virtualization of an extramarital affair does not make it right. Rather, Ashley Madison’s success lies in human nature, which may be inclined to believe that worldly fun depends on indulging in temptations, and the company succeeded in leveraging the magnetic force of those temptations. The proof: at the time of its public exposure, Ashley Madison had more than 42 million subscribers.
Some new apps focus on alleviating the pain of finding a parking spot in crowded metropolitan areas (see, for example, Streetline.com). Parking spaces are a public asset and should not be held hostage or temporarily “owned” by a computer app that detects an open space. But cities are slow to react and do not have the legal code to regulate such practices. As a result, companies selling parking opportunities establish a foothold in those cities. One might argue that there is nothing wrong with this; no existing regulation was violated. In reality, however, the company selling the privileges comes to virtually own the parking spots, a public asset that presumably should not be tied up or allocated to preferred individuals. Helping people find a parking spot on a crowded city block is a good thing. On a larger scale, however, IT appears to be helping encroach upon the spirit of public ownership and the right of use.
Also in the transportation arena, Uber has taken up the fight in various countries to legitimize its business in the face of established regulations and licensing requirements. The battle being fought in France is particularly noteworthy; there, Uber argues that the country’s regulations are stifling its business and must be changed. Uber’s lawyer, Hugues Calvet, argues, “The current legal framework doesn’t correspond at all to new digital models.” The plaintiffs’ lawyer, Maxime de Guillenchmidt, counters, “Uber’s argument is intellectually dishonest.”2 Uber’s technology improves efficiency and makes life more comfortable, but should the company act as the regulator of its own industry? Should it get to the ends first and then legitimize the means? Now that the “Uberization” of businesses and even industries is on the rise, the question of intellectual dishonesty may pervade much of the service economy.
Technology As a Means
While innovative ways to deploy technology as a means to an end (wrong or right) keep emerging, the traditional ways remain strong. For example, social engineering has a long and painful history, yet it is an ever-growing practice. Senior citizens are cheated out of their life savings. Even concert tickets purchased online are snatched away electronically by looters. As an enabler, technology ends up carrying the blame, but in reality it only feeds the temptation, perhaps with greater vigor. Resisting temptation is not up to technology.
The “conversion” of traditional, non-technology crime into cybercrime is striking. Ordinary fraud takes the shape of online fraud (e.g., auction fraud, advance-fee fraud, phishing); burglary and malicious damage convert to online abuse (hacking, denial of service, viruses); child sex offenses reappear as online child grooming and child pornography web sites; money laundering dresses up as online payment systems (through eCurrency, for example); ordinary theft is now more polished (e.g., identity theft; movie, music and software piracy); and stalking goes underground (cyberstalking, cyberbullying).3 The crimes are what they always were; only the variety, impact and remoteness of technology enable myriad new scenarios, all with the same underlying human frailty hiding behind it.
There are many ways in which people can do the wrong thing; technology is just one lever they can use. In the corporate arena, it appears that the actor, whether executives or the organization, uses technology with little remorse. At times, the actor might not have done the wrong thing at all if technology were not present as a collaborator in the act. It is strange that IT seems to be an acceptable medium for crafting otherwise unacceptable, or at least suspicious, schemes.
Why Technology?
Why is IT such a source of comfort to businesses and individuals in doing the wrong thing? A recent study of digital signatures found that people who sign on a piece of paper are more honest than those who sign digitally; in fact, the dishonesty levels of digital signers exceed those of people who do not sign the document at all.4 This does not bode well in light of the projection that, by 2017, the number of e-signature transactions will exceed 700 million.5 According to the study, people exhibit higher levels of dishonesty when appending a digital signature because of a weak association between the signature as a commitment and their self-presence in the act of signing, i.e., how much of themselves was present in the signature they provided. The findings show that the higher the score on self-presence, the greater the likelihood of honesty with regard to the commitment conveyed by the signature.
In a discussion of a study of dishonesty in golf, Dan Ariely notes that psychological remoteness from the actual act can lead people to indulge in dishonesty.6 The study’s findings showed that dishonesty in golf is directly influenced by psychological distance from the action: cheating becomes much easier when there are more steps between the individual and the dishonest act. For example, a golfer wishing to improve the unfortunate location of the ball finds it easy to rationalize nudging it with a club, harder to rationalize kicking it, and hardest of all to justify picking it up by hand and moving it to another spot.
While these examples do not involve IT, the idea of remoteness from the act appears relevant. In the Internet world, technology distances people from experiencing an act firsthand, and that distance may incline them to do the wrong thing.
Another factor that seems to play a rather strong role is loopholes in the law. Laws and regulations may set only the lower thresholds of moral behavior, but they are nevertheless important in motivating people to do the right thing. Because laws often do not keep pace with advances in technology and IT-leveraged business models, a gaping hole can open in which an act is morally wrong but legally compliant. The weak power of the law in the face of leapfrogging technology leads businesses to act first and worry about the regulations later. The battle in the fantasy football industry is a vivid example of such crises: regulators argue that the activity is, indeed, gambling, while the industry’s rebuttal hinges on the claim that fantasy football involves conscious decision-making by the subscriber (player).7 The same kinds of ethical puzzles are likely to arise when drone usage rises to a visible level in the economy and driverless cars become part of daily life.
Endnotes
1 Shark Tank, 22 October 2015 episode, http://abc.go.com/shows/shark-tank/video/PL5539712/VDKA0_qjquiu9x
2 Schechner, S.; “Uber Accuses French Government of Trampling on the Sharing Economy,” The Wall Street Journal, 15 September 2015, www.wsj.com/articles/uber-accuses-french-government-of-trampling-on-the-sharing-economy-1442318187
3 Australian Crime Commission, “Cyber and Technology Enabled Crime,” July 2013, http://www.crimecommission.gov.au/publications/intelligence-products/crime-profile-fact-sheets/cyber-and-technology-enabled-crime
4 Chou, E.Y.; “What’s in a Name? The Toll E-signatures Take on Individual Honesty,” Journal of Experimental Social Psychology, vol. 61, November 2015, pp. 84-95
5 Anand, P.; “The Lies E-Signatures Tell,” The Wall Street Journal, 14 October 2015, www.wsj.com/articles/the-lies-e-signatures-tell-1444788405
6 Ariely, D.; The Honest Truth About Dishonesty: How We Lie to Everyone—Especially Ourselves, Harper Perennial, USA, 2013
7 PR Newswire, “New York Seeks End To Fantasy Gaming,” 18 November 2015
Vasant Raval, DBA, CISA, ACMA, is a professor of accountancy at Creighton University (Omaha, Nebraska, USA). The coauthor of two books on information systems and security, his areas of teaching and research interest include information security and corporate governance. Opinions expressed in this column are his own and not those of Creighton University. He can be reached at vraval@creighton.edu.