Two-factor authentication will not save you
By ABHINAYA KASAGANI — akasagani@ucdavis.edu
I was greeted upon my return to campus this winter by being locked out of my very own Canvas, and with it, my student email, Oasis and Schedule Builder. 403: Access Denied. I exercised patience and restraint — yes, I have entered my one-time password (OTP) for the hundredth time; yes, this is really my device; what do you mean I cannot regain access for the foreseeable future? Whatever, I give up!
Two-factor authentication (2FA) has, in recent years, become a standard measure to bolster security online. Requiring users to authenticate their identity with a password, app or device — something only they have access to — promises added protection against scams and fraud. Despite its widespread adoption, 2FA remains an incomplete answer to a much larger problem in tech infrastructure: the security-versus-innovation paradox. By placing the onus of security on individual users without addressing deeper systemic flaws, the system entrusts its users with the burden of their own safety and introduces new ethical concerns around sensitive data like biometric information.
The concept of security fatigue is fundamental to one’s understanding of cybersecurity. Essentially, as individuals become more inundated with security measures, they grow weary of or reluctant to engage with them. People can become so burdened by security alerts, password change requests and warnings that they greet the bombardment with ambivalence. Worn down, they may also reflexively approve multi-factor authentication prompts — the push requests users must “accept” on their device — which attackers exploit by flooding a victim with requests until one is approved.
As businesses attempt to reconcile the need for robust security measures with innovation, they often find it difficult to avoid compromising one for the other. An example of this trade-off arises when users seek intuitive systems — ones that are easy to access and quick to use — which tend to come with less stringent security protocols.
The University of California’s adoption of Duo is a prime example of how 2FA, while offering additional security, is not one-size-fits-all. The design of 2FA assumes that its users have access to modern technology and reliable internet — the very systems everyone is expected to rely on — which inadvertently frustrates and alienates users who are unable to use these tools effectively. Duo, which works by sending a one-time password to the user’s phone or requiring a code from its app, may be effective against certain attacks but does nothing to address insecure passwords, poorly designed databases or outdated software. This is particularly worrisome in environments where sensitive data is at stake.
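For readers curious what a one-time password actually is under the hood: the sketch below shows the generic time-based OTP algorithm (TOTP, RFC 6238, built on the HOTP scheme of RFC 4226) that authenticator apps commonly implement. This is an illustrative example of the general technique, not Duo’s actual code.

```python
import base64
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # "dynamic truncation" step
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)   # keep the last 6 digits


def totp(base32_secret: str, period: int = 30) -> str:
    """Time-based OTP (RFC 6238): HOTP over the current 30-second window."""
    key = base64.b32decode(base32_secret, casefold=True)
    return hotp(key, int(time.time()) // period)
```

Both the server and the user’s app derive the same six digits from a shared secret and the clock, which is why a stolen code expires within seconds — and also why, as noted above, the scheme does nothing about a weak account password or a breached database.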
Additionally, one of the biggest ethical concerns with 2FA is its increasing use of biometric data — fingerprints or facial recognition — to verify users. While these methods promise enhanced security, they come with privacy risks that users may not fully comprehend.
The centralized storage of biometric data increases the risk of breaches and systemic misuse, even with encryption, and programs like Duo that collect confidential data raise ethical concerns regarding surveillance under the guise of security. Biometric data is highly sensitive and irreversible — unlike a password, a fingerprint cannot be changed if compromised. The more ubiquitous these methods become, the more likely it is that our personal data will be mishandled and exposed. While companies claim to protect this data through encryption, breaches cannot be fully anticipated, and users are left vulnerable, with all of their data in one basket.
Research shows that even with advanced security protocols in place, human error — whether through poor password practices, mishandling sensitive information or failing to recognize phishing attempts — remains a primary reason why achieving security is difficult despite technological advancements. The broader concept of the security dilemma, wherein measures taken to increase security may inadvertently introduce new risks or expose weaknesses, suggests that as security measures grow more complex, user mistakes will multiply. So, even the most sophisticated tools may not be enough to prevent breaches.
As one evaluates the usability of 2FA, it becomes clear that this misalignment between security and design perpetuates a false sense of security, leaving users to fend for themselves. A shift is needed in how we approach security in order to create a genuinely secure digital environment — one that does not rely on users’ vigilance to keep it safe.
The focus needs to be on accommodating both usability — making it easy for users to choose the right security action and limiting the number of security decisions they must make — and security, through more secure software architectures and privacy-conscious data handling. Until then, the cycle continues and the paradox persists.
Written by: Abhinaya Kasagani — akasagani@ucdavis.edu
Disclaimer: The views and opinions expressed by individual columnists belong to the columnists alone and do not necessarily indicate the views and opinions held by The California Aggie.