People often create predictable passwords, relying on common words, simple character substitutions, or keyboard patterns. Blase Ur from the University of Chicago and a team from Carnegie Mellon University developed and evaluated a password meter that gives a more accurate rating of password strength and offers usable feedback. Password policies dictate the characteristics of a ‘good’ password. Although such policies can improve security, people often find complex composition requirements unusable. Password meters give people feedback while they create a password, usually based on heuristics. These meters can suffer from having to send the prospective password to a server repeatedly, or from giving a poor assessment of password strength. To overcome these issues the researchers built a system that combines machine learning, to assess the predictability of a password, with a heuristic-based component that can provide specific feedback, such as suggested improvements.
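As a rough illustration of that hybrid idea (not the authors' actual implementation), a meter of this kind might use a model-based estimate of guessability to drive the score while separate rule-based checks drive the feedback text. Everything in the sketch below, including names like estimateGuessNumber, is hypothetical.

```typescript
// Hypothetical sketch of a hybrid password meter: a model-based strength
// estimate drives the score, and simple heuristic checks drive the feedback.

interface MeterResult {
  score: number;          // 0..1, higher = stronger
  suggestions: string[];  // concrete, actionable feedback
}

// Stand-in for a client-side model (the paper uses machine learning) that
// estimates how many guesses an attacker would need.
function estimateGuessNumber(password: string): number {
  // Placeholder only: a real model would be far more sophisticated.
  return Math.pow(10, password.length * 0.8);
}

const COMMON_WORDS = new Set(["password", "letmein", "qwerty", "iloveyou"]);

function ratePassword(password: string): MeterResult {
  const suggestions: string[] = [];
  const lower = password.toLowerCase();

  // Heuristic checks that map directly to user-facing suggestions.
  if (password.length < 12) {
    suggestions.push("Consider making your password longer.");
  }
  if (COMMON_WORDS.has(lower)) {
    suggestions.push("Avoid common words that attackers try first.");
  }
  if (/^[a-z]+\d{1,4}$/.test(password)) {
    suggestions.push("Putting digits only at the end is a predictable pattern.");
  }
  if (/(.)\1\1/.test(password)) {
    suggestions.push("Avoid repeating the same character several times.");
  }

  // Convert the estimated guess number to a 0..1 score on a log scale,
  // treating roughly 10^14 guesses as "strong".
  const score = Math.min(1, Math.log10(estimateGuessNumber(password)) / 14);

  return { score, suggestions };
}

// Example usage:
// ratePassword("monkey123") -> low score plus suggestions about length and patterns.
```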
They tested this system by recruiting 4,500 American adult participants for a study run on Mechanical Turk (Amazon's crowdsourcing service). Participants created passwords and were shown different presentations of feedback. Two classes of policy were enforced, one more stringent than the other. Participants had to remember their password, entering it again after completing a survey and then again after two days. About half of the participants committed the password to memory without using an aid, such as a notebook or password manager; results from these participants were used to assess effects on password memorability. The more stringent password policy produced stronger passwords, but people were less likely to remember them. Providing feedback led participants to use longer passwords. It also changed their decisions while creating passwords: they were more likely to delete characters and re-enter passwords. More feedback made participants find the process of creating a password more annoying, but they also agreed it made their passwords stronger. The more stringent policy didn't affect how long participants took to create a password, but they enjoyed the process less. Participants felt that a coloured password meter helped them create a stronger password, yet removing it while keeping the text feedback didn't appear to affect the strength of their passwords.
This study offers some useful insights for improving password security.
The authors recommend the use of text feedback even for less stringent password policies, as providing feedback proved very helpful. Providing a coloured strength bar also helped make the process more enjoyable for people.
The meter has been made open source at https://github.com/cupslab/password_meter to help encourage the wider use of password meters.
There is a lot more detail in this paper and a number of insights to help those working on the development of authentication approaches, UX and policy.
You can see Blase Ur present this paper and field questions at CHI ’17 (the ACM CHI Conference on Human Factors in Computing Systems) on YouTube (under 19 minutes).
cite:
Ur, B., Alfieri, F., Aung, M., Bauer, L., Christin, N., Colnago, J., … & Johnson, N. (2017, May). Design and evaluation of a data-driven password meter. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 3775-3786). ACM.
source:
https://dl.acm.org/citation.cfm?doid=3025453.3026050