
The Mental Life of a Bank Loan Algorithm: A True Story

If my bank algorithm can't recognize me, is it because I'm not acting like me?

Algorithms are extremely clever at accurately processing huge quantities of information. But a recent problem with a bank loan gave me a disturbing look behind the curtain.

Last week I applied for a personal loan. My bank had told me I could apply for the loan online. So I did, through their website.

I filled out the form, gave it my personal details, and pressed submit.

When I woke up the next morning the bank had approved the loan and informed me they would now transfer the money to my account. Hooray.

Except the rate was 7.9%, about five percentage points higher than I was expecting. The bank had said it could be as low as 3% (as advertised on their website). This was of course based on a credit check. To my knowledge, my credit is good. So why 7.9%?

I suspected this was an issue with my credit. So I called the bank to find out.

I was told that the application simply submits the form to an algorithm and they get back a rate.

"Do you know why it's that rate?"

"Sorry sir, I don't."

"Is it my credit?"

"We don't get any information about your credit."

"Can you look?"

"No, we don't have access to that information. It's all in the algorithm."

"So you just submit the information to an algorithm and it says what rate I get?"

"Yes."

I asked them to cancel the loan.

Then I went online to another website and submitted another application. The website returned my approval instantly and said the rate was 10.9%. Yikes.

I resigned myself to returning to my original bank and accepting their unexpectedly high rate.

I called my bank back and asked them if I could uncancel the canceled loan.

"No, but you can reapply with me on the phone."

"Ok," I said, "Let's do it."

She took my information again. "So what's your title," she said. "In your online application, you put 'Professor'. But your bank account says 'Dr.'" Note that all my bank cards from this same bank have different names on them, so this isn't trivial.

"It doesn't matter."

"Well, you have to pick one."

"Dr., I guess, so it won't confuse anything."

She took the rest of my information, the same information I had entered online, and submitted my application to the algorithm. A second later she said, "It's been approved at 3%. We'll send you the paperwork in the mail."

Notice that the apparent error here was fairly simple. The bank algorithm didn't recognize me as a member of the bank, so I didn't get the bank member rate. It failed to recognize me because I used a different title.

I had chosen 'Professor' online because that's what I was, but my bank account said 'Dr.', which is also what I was. More specifically, Dr. is exactly what I was when I signed up with the bank. If you're me, the difference in titles is completely irrelevant to anything. If the bank hadn't asked for one, I wouldn't have put one.

Algorithms that don't have any semantics dedicated to understanding what words mean have no way of knowing how important a title is. I suppose if the algorithm had any kind of understanding of the information it was dealing with it would have said to itself, "99.9% of everything this applicant says about themselves is the same as this individual who has an account with us. Most importantly, this individual claims they have an account with us and they claim they have an account number. Indeed, they want the money sent into this other guy's bank account. Maybe they're the same guy!"
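That "most fields match" reasoning is easy to express computationally. Here is a minimal sketch in Python of the difference between brittle exact-field matching, which is presumably what happened, and a similarity-weighted match that tolerates cosmetic differences. The records, field names, and weights are all hypothetical; nothing here reflects the bank's actual system:

    # Hypothetical records: the online application vs. the account on file.
    applicant = {"title": "Professor", "first": "Thomas", "last": "Hills",
                 "account": "12345678", "address": "1 Example Lane"}
    on_file   = {"title": "Dr.",       "first": "Thomas", "last": "Hills",
                 "account": "12345678", "address": "1 Example Lane"}

    def exact_match(a, b):
        # Any differing field, however trivial, makes a "different person".
        return all(a[k] == b[k] for k in a)

    def weighted_match(a, b, weights, threshold=0.9):
        # Weighs strong identifiers (account number) far more heavily than
        # cosmetic fields (title), so one mismatched title can't flip the result.
        score = sum(w for k, w in weights.items() if a[k] == b[k])
        return score / sum(weights.values()) >= threshold

    weights = {"title": 0.02, "first": 0.18, "last": 0.2,
               "account": 0.4, "address": 0.2}

    print(exact_match(applicant, on_file))              # False: a "new" customer
    print(weighted_match(applicant, on_file, weights))  # True: same person

Real record-linkage systems are far more sophisticated than this, but the principle is the same: identity can be a weighted judgment rather than an all-or-nothing string comparison.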

But the algorithm didn't do this. It treated a difference in title as a complete difference in identity. Might it do the same if the number of my dependents changed? Or if my partner changed? What about people who change their last name, their gender, or sometimes but not always include suffixes like Junior? What if I recently moved or input my address incorrectly? What is the potential impact of a typo? In this case, the error was worth about $100 per $1000 borrowed.
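That dollar figure is easy to check with back-of-the-envelope arithmetic. A minimal sketch, assuming a two-year term and simple annual compounding (both assumptions are mine; the article doesn't state the loan's actual terms):

    # Rough cost of the misidentification per $1000 borrowed, assuming a
    # two-year term with annual compounding (real personal loans typically
    # amortize monthly, but this suffices for an order-of-magnitude check).
    principal = 1000.0
    years = 2

    def total_interest(rate):
        return principal * ((1 + rate) ** years - 1)

    gap = total_interest(0.079) - total_interest(0.03)
    print(f"Extra cost per $1000 over {years} years: ${gap:.0f}")  # about $103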

Note that the algorithm, by not exposing its mistake to the person I spoke with on the phone, made her stupider: it forced her to ask me for a title that would misidentify me, even though she already knew who I was.

It is easy to say that it was my mistake. Obviously, it was my mistake. On the face of it, I am the only person who could have acted differently to change the outcome.

One might argue it was the algorithm's programmers who made the mistake by making the algorithm incapable of seeing what any human could plainly see. But this is expecting too much from an algorithm. Should the algorithm be able to make a subjective judgment about whether two potentially different people are the same? What if they aren't?

Perhaps the algorithm should be able to smell that something is fishy and ask some more questions. Clearly, we're not there yet.

The problem really isn't who is at fault. The real problem is how we are going to teach ourselves about the shortcomings of algorithms. In a previous article, I wrote about the mental health of algorithms. The antics of this bank loan algorithm add a humorous footnote to that article.

It reminds me of the man called Shereshevsky in Luria's book, "The Mind of a Mnemonist". Shereshevsky had a near-perfect memory. He could remember numbers and names and what had happened on specific days of his life. But because of his memory, he also had trouble identifying people. He remembered everything so well that if he met a person wearing a different expression, they became a new person. A simple change in expression was enough to throw Shereshevsky off. Shereshevsky could not generalize.

The bank algorithm also could not generalize. It was blind to something that most people would consider obvious. Yet this algorithm could generalize in another sense. It is explicitly programmed to take in my information and generalize from the loan repayments of other individuals like me, figuring out my level of risk from the risks the bank had taken on people like me.
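For contrast, here is a toy illustration of the kind of generalization the algorithm can do. This is my own construction, not the bank's model: it estimates a new applicant's risk from the outcomes of the most similar past borrowers.

    # Hypothetical past borrowers: (income in $1000s, years at address, defaulted?)
    history = [(60, 5, 0), (62, 4, 0), (58, 6, 0), (30, 1, 1), (28, 2, 1)]

    def estimated_risk(applicant, history, k=3):
        # k-nearest-neighbor average of past default outcomes:
        # risk generalized from "other people like me".
        nearest = sorted(history, key=lambda r: (r[0] - applicant[0]) ** 2
                                              + (r[1] - applicant[1]) ** 2)[:k]
        return sum(r[2] for r in nearest) / k

    print(estimated_risk((61, 5), history))  # ~0.0: resembles the repayers
    print(estimated_risk((29, 1), history))  # ~0.67: resembles the defaulters

Generalizing across people it could do; generalizing across two records of the same person it could not.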

Because the algorithm could not generalize my identity to other instances of my identity, it effectively treated me like two different people. It was a computational Shereshevsky, treating a man who smiles and later frowns as different individuals.

My situation is pretty benign, but I doubt that is always the case. Algorithms make judgments about insurance, medical treatment, and whether a sign is a stop sign or a 50 mph sign. It is almost certain that military robots will misidentify friendlies as targets, and they probably already have. Algorithms are being placed in increasingly important roles, where not only money but life and health are on the line.

In many instances, algorithms are less error-prone than humans. But their errors are not just fewer; they are different. And that is likely to change the kind of collateral damage we should expect from using them. Worse still, it is extremely difficult to argue with an algorithm, because few people know exactly what the algorithm is doing. Maybe least of all, the algorithm itself! In some cases, there may not be a single individual who understands the full workings of the algorithm. Then it becomes difficult to change, to improve, and to detect exactly where it's going wrong.
