Five ethical decisions cybersecurity pros face: What would you do?

July 11, 2022 by Susan Morrow

Back in the 1990s, when I first entered the world of information security, I was creating encryption software. The UK government wanted to know the ins and outs of encryption algorithms used in commercial product development. As a “matter of national security,” government officials visited our company and asked us, as encryption vendors, to give them a back door into our application in case of a “national emergency” — essentially, a “break glass in case of emergency” access code.

We felt this presented an ethical dilemma: We wanted to give our enterprise customers robust security to protect their proprietary information and intellectual property, but a back door was potentially a way to circumvent that protection. How could we, in all honesty, present the product as a robust encryption solution when we knew it had a back door that could slip into the wrong hands?

Ethics in software is a complex area, and the encryption dilemma I experienced in the 1990s has continued to haunt security vendors. Here is a short exploration of five other examples of issues we face as security professionals and/or security app designers and developers.

1. Walking the tightrope of making money and calling out bad tech

Artificial intelligence (AI) is a technology that has ridden the wave of tech futures for several years now, offering the promise of unique innovations. Companies that build AI apps are pulling in large amounts of investment money, and the potential revenue is eye-opening: Statista researchers predict around $126 billion in revenue from AI-enabled apps by 2025. But AI is still an emerging technology, particularly where its predictive qualities are concerned.

In 2020, Timnit Gebru, a researcher in AI and ethics at Google, was allegedly sacked for highlighting a lack of diversity in AI research and bias in AI facial recognition: Gebru's expertise includes the high error rates of AI when analyzing the faces of women with darker skin tones. Gebru felt an ethical imperative to ensure that technology used to identify people was fit for purpose for everyone, not just white faces. Her stance put Google's investment and stock price at risk, and her dismissal has been blamed on this factor. This type of conflict of interest is behind many ethical decisions IT professionals make every day.

If you were in Gebru’s position, would you have called out Google, even if it meant losing a dream job?

2. Sacked for telling the security truth

Security professionals are often in the firing line of security risk, even losing their jobs because of breaches. For example, San Francisco State University suffered a major data breach in 2014, exposing many student records. An information security officer at the university, Mignon Hoffman, was sacked after the breach. Hoffman alleged that the university sacked her for disclosing its security failings; in other words, she chose to become a whistleblower as the most ethical thing to do. As a result, Hoffman sued the university for "wrongful termination and whistleblower retaliation."

Hoffman felt that telling the truth was an important ethical decision. However, she paid the price by losing her job. Whistleblowing is protected by law in the U.S. The decision to blow the whistle on bad practices can be complex and contextual, but some security issues are just too big to ignore. Such was the case when Edward Snowden blew the whistle on the surveillance activities of the NSA back in 2013. Snowden paid a heavy price, having to leave his home country. He said in the first documents he provided, "I understand that I will be made to suffer for my actions," but "I will be satisfied if the federation of secret law, unequal pardon and irresistible executive powers that rule the world that I love are revealed even for an instant."

Sometimes an ethical decision is a no-brainer. But would you have given up your home to highlight state wrongdoings?

3. To tell or not to tell, that is the question

Some jobs in information security can place an employee in an ethical dilemma room with multiple doorways. The door you choose to open may embarrass your employer or result in your prosecution. A recent example of this dilemma arose when a security firm employee hacked into a client's computer at Middlebury College and discovered a hoard of images and videos showing child sexual abuse. The hacker was part of a penetration testing company looking for vulnerabilities in the client's network. The ethical hacker was not supposed to be trawling for information on personal computers and was concerned about being prosecuted for doing so. However, the pentester decided that this discovery was too important not to disclose. The pentester first connected to the offending computer containing the material before handing the evidence over to law enforcement to investigate.

Security professionals have a duty of confidentiality to clients, often expressed in confidentiality agreements signed by pentesters. This confidentiality must be taken seriously because pentesters may have access to privileged and sensitive information. This fact leads to a meaningful conversation about the ethical conduct of ethical hackers. However, some discoveries, especially those that could cause harm to others, should fall outside this remit.

In this case, protecting children made the pentester’s decision easy. I imagine most readers would agree.

4. Zero-days for sale

Zero-days were used in 66% of malware attacks in Q3 2021; they are open doors not yet closed by security patches. In other words, a cybercriminal who knows about a zero-day can exploit it to their heart's content. Consequently, finding a zero-day can be lucrative for the person who locates the flaw if the affected company runs a bug bounty program. These bug bounty programs can offer thousands of dollars to anyone who can identify a zero-day exploit. For example, a Zoom zero-day recently sold for $500,000.

Unfortunately, not all zero-day marketplaces are legitimate and controlled by a vendor. A grayer, fuzzier marketplace for zero-day sales exists, one populated by nation-states, Italy's Hacking Team, Israel's NSO Group and others. These zero-days can end up in the hands of cybercriminals and state-sponsored hacking groups: This was the case for a group of Al Jazeera journalists, hacked using NSO Group spyware that exploited an acquired zero-day.

Security researchers who find important zero-days, such as iOS zero-days, could end up in an ethical dilemma when they are offered a large sum of money, perhaps millions of dollars, for their find. They may naively believe they are selling to a legitimate group, but the lines between states, state-sponsored actors and criminal buyers can be fuzzy.

Many of us would cave at the thought of a massive payday for a zero-day. The question is: Would you do it if you knew it could end up putting patients at risk in a healthcare setting?

5. Harm to privacy and dark patterns

Technology increasingly shapes how we live and interact with others online, and our data is our proxy in this digital world. Privacy and app design can seriously impact this digital life. Dark patterns are UI/UX design techniques that nudge users into actions that benefit the application rather than the user. You can almost think of them as a form of corporate phishing, as they often involve taking more personal data than is needed to perform the task.
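
To make this concrete, here is a minimal, hypothetical sketch (the form and field names are invented for illustration, not taken from any real product) contrasting a dark-pattern signup form with a data-minimizing one. The only difference is which optional data-sharing boxes are pre-ticked, yet a user who simply clicks through the dark-pattern version hands over more personal data than creating an account actually requires.

```typescript
// Hypothetical consent fields for a signup form (illustrative only).
interface ConsentField {
  id: string;
  label: string;
  requiredForService: boolean; // is this data needed to deliver the service?
  defaultChecked: boolean;     // what a click-through user ends up agreeing to
}

// Dark-pattern variant: optional data sharing is pre-ticked and bundled in.
const darkPatternForm: ConsentField[] = [
  { id: "email",       label: "Email address (account login)",      requiredForService: true,  defaultChecked: true },
  { id: "marketing",   label: "Send me partner offers",             requiredForService: false, defaultChecked: true },
  { id: "adProfiling", label: "Share my activity with advertisers", requiredForService: false, defaultChecked: true },
];

// Data-minimizing variant: optional sharing is strictly opt-in.
const minimalForm: ConsentField[] = [
  { id: "email",       label: "Email address (account login)",      requiredForService: true,  defaultChecked: true },
  { id: "marketing",   label: "Send me partner offers",             requiredForService: false, defaultChecked: false },
  { id: "adProfiling", label: "Share my activity with advertisers", requiredForService: false, defaultChecked: false },
];

// How much optional data a user gives away by just clicking "Continue."
function excessDataCollected(form: ConsentField[]): number {
  return form.filter((f) => !f.requiredForService && f.defaultChecked).length;
}

console.log("Dark-pattern form over-collects:", excessDataCollected(darkPatternForm), "optional fields"); // 2
console.log("Data-minimizing form over-collects:", excessDataCollected(minimalForm), "optional fields");  // 0
```

The design choice being illustrated is simply the default: both forms collect the same required data, but the dark-pattern version relies on users not unticking boxes, which is exactly the kind of user journey an ethical designer is pressured to build.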

Deceptive Design is a site dedicated to calling out unethical dark patterns in app and web design. The site describes the practice of "privacy zuckering," where a site or app tricks a user into publicly sharing more personal information than they intended. Dark patterns are not illegal, but they are unethical. They are often beneficial to an organization because they take people down a pathway that encourages purchases or provides data to ad tech companies. Facebook is renowned for unethical dark pattern practices and poor privacy; the dark patterns behind the Facebook/Cambridge Analytica debacle are a poster child for privacy harm.

Security and privacy professionals involved in designing systems and apps that utilize personal data may be under pressure to use dark patterns. In this situation, it can be challenging to be ethical and resist the pressure to build user journeys that end in privacy harm.  

Conclusion: It’s a question of ethics

Technology is not just about having the latest gadget: Tech is deeply integrated into our lives. Social media and other online interactions can have a real-life impact on how our society is governed. Technology has the potential to create a more inclusive society or, conversely, to divide and conquer. Technologies can magnify inequalities when they are poorly designed or when the harm they could inflict is not considered. Cybersecurity, privacy and digital identity sit at the intersection of ethical software design, development and use.

When security professionals engage in pentesting, harden security systems or hold privileged access to perform their roles, they may encounter the types of ethical dilemmas described above. But the nuances behind ethical decisions are complex, rarely black or white: One person's ethical choice is another's missed opportunity. Because of this, ethics must be part of general cybersecurity education for all security professionals. We are all human, and being ethical is always challenging. Hopefully, our decisions are the right ones. Perhaps cybersecurity professionals should take an oath, as our medical counterparts do, that says we will "do no harm."

Susan Morrow

Susan Morrow is a cybersecurity and digital identity expert with over 20 years of experience. Before moving into the tech sector, she was an analytical chemist working in environmental and pharmaceutical analysis. Currently, Susan is Head of R&D at UK-based Avoco Secure.

Susan’s expertise includes usability, accessibility and data privacy within a consumer digital transaction context. She was named one of the Most Influential Women in UK Tech 2020 by Computer Weekly and was shortlisted by WeAreTechWomen for its Top 100 Women in Tech list. Susan is on the advisory boards of Surfshark and Think Digital Partners, and she regularly writes on identity and security for CSO Online and Infosec Resources. Her mantra is to ensure human beings control technology, not the other way around.