Date Published: 12.07.2024
Wadham's Tutorial Fellow in Management, Shumiao Ouyang, addresses whether ChatGPT can make you rich, why payment apps are bigger in China, and more.
We caught up with one of Wadham's newest Fellows, Shumiao Ouyang, to discuss his wide-ranging research at the intersection of finance and tech. The write-up below is a condensed form of an hour-long discussion covering AI, digital payment apps, consumers' data privacy concerns, and more.
Also available in podcast form is a snippet from the conversation that expands on the impact of mobile payment apps and joyfully explores related tangents. Find out whether Wadham's Comms assistant is irrational for still sticking with a good ol' debit card!
I’m an economist, but I'm quite curious about many things, like biology, computer science, and AI. I’ve been attending seminars on Large Language Models like ChatGPT quite frequently in Oxford. I just presented a paper at one!
Mine was about my recent research on the risk preferences of Large Language Models. We examined 30 of them, both open-source and closed-source, to see how they handle risk. We found that a given model usually has a consistent approach to risk. For instance, models from Mistral, a French AI company, are consistently risk-seeking, whereas ChatGPT is risk-neutral. The AI models on the market have a wide range of risk preferences: some risk-seeking, some risk-neutral, some risk-averse.
In fact, you can ask the AI what its approach to risk is and it will give you an answer. It’s like when you ask an investor what their tolerance for risk is. And we found that the answers you get back are generally accurate. If you give an AI a choice between a risky option, like a lottery, and something safer, its behaviour matches what it says about itself. An AI that claims to be risk-seeking will be more open to the lottery.
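To make the setup concrete, here is a minimal sketch of that kind of lottery-versus-safe-option test. The prompt wording, the payoff numbers, and the scoring rule are illustrative assumptions rather than the study's actual protocol, and the OpenAI Python SDK is used purely as an example backend.

# Illustrative sketch of a lottery-choice risk elicitation for an LLM.
# Prompt, payoffs, and scoring are assumptions, not the study's protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You must choose exactly one option and reply with 'A' or 'B' only.\n"
    "Option A: a guaranteed payment of 50.\n"
    "Option B: a lottery paying 100 with 50% probability, otherwise 0.\n"
    "Which do you choose?"
)

def ask_once(model: str) -> str:
    """Return the model's pick, 'A' (safe) or 'B' (lottery), for one trial."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.choices[0].message.content.strip().upper()[:1]

def lottery_share(model: str, trials: int = 20) -> float:
    """Fraction of trials picking the equal-expected-value lottery:
    well above 0.5 looks risk-seeking, well below looks risk-averse."""
    picks = [ask_once(model) for _ in range(trials)]
    return picks.count("B") / trials

print(lottery_share("gpt-4o-mini"))

Pairing a behavioural measure like this with the model's own self-description is what lets you check whether what an AI says about its risk tolerance matches how it actually chooses.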
Aligning AI with human values is super important, and the computer science literature focuses on three values in particular: harmlessness, honesty, and helpfulness. But we found that when you adjust an AI model to score better along one of these dimensions, it becomes more risk-averse in general.
That’s not necessarily a bad thing, but it is something to be aware of. Think about the use of AI in investment decisions. You might think you can make an AI more ethical without changing its basic investment preferences. But currently that’s not the case. The AI will become more risk-averse even when the investment options have no obvious ethical impact. That’s worth bearing in mind if you are using these models.
It depends on how financially literate you already are! We researched this and found that AI models can help people make better decisions, but the benefits go mostly to the more skilled individuals.
These AI models don’t provide direct advice about what you should invest in. They mostly just provide information and help you find investment opportunities that match your preferences. It takes financial literacy to parse this information, but for those who have it, it’s very useful!
Many of the countries that were predominantly cash-based for a long time, like China and India, are shifting to mobile payment apps at a fast pace. That’s probably because the transition from cash to mobile payment is a big improvement, so people have been very willing to embrace these apps.
By contrast, in the UK or US, we’ve been using the debit/credit card system for many, many years. The benefits of switching from that system to the mobile payment system are more marginal, so there is less incentive to make the change.
Yes.
And to say more about the benefits of transitioning away from cash to digital payment: one that I’ve studied is increased financial inclusion. When less wealthy or less educated individuals increase their digital spending, they are more able to access credit. We found that if you increase your digital payment amount by 1%, your credit line will increase by about 0.41%. That’s quite big.
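As a back-of-the-envelope illustration of what a figure like that implies, the sketch below treats the 0.41 number as an elasticity; that reading, and the growth figures plugged in, are assumptions for illustration rather than results reported above.

# Back-of-the-envelope reading of the 0.41 figure, treated here as an
# elasticity (an assumption about the underlying specification): a given
# proportional rise g in digital payments maps to roughly (1+g)**0.41 - 1.
ELASTICITY = 0.41

def credit_line_growth(payment_growth: float) -> float:
    """Proportional credit-line increase implied by a given
    proportional increase in digital payment volume."""
    return (1 + payment_growth) ** ELASTICITY - 1

print(f"{credit_line_growth(0.10):.1%}")  # 10% more digital spending -> ~4.0%
print(f"{credit_line_growth(1.00):.1%}")  # doubling it -> ~32.9%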
Credit lenders want to make sure that you’ll be able to pay them back, so they want information about you and your finances. If you are quite wealthy, you might have lots of ways to indicate your creditworthiness, like your income statement, education and so on. But if you come from an underprivileged background, it's quite hard for you to prove that you are creditworthy, even if you are. Paying digitally helps because it creates a record of your transaction and consumption patterns. Lenders can use that data to infer your creditworthiness.
Yes, the downside of this greater financial inclusion is that these digital providers acquire a tremendous amount of data about you. And a lot of people worry about how that data will be used.
This is sometimes called the ‘privacy paradox’. It comes from trying to reconcile two different datasets. You look at surveys that ask people, “are you concerned about privacy?” And people say they are very concerned. Then you look at another dataset, which shows behaviour, and you see that people are nonetheless giving away a lot of data.
When I researched this, we gathered both kinds of data from the same people. It turned out that the people who voiced the most concern about privacy in our survey also shared the most data! The main driver of the uptick in both concern and data sharing seemed to be demand for digital services.
Basically, take two individuals who start with similar privacy concerns because neither has shared much information yet. One has more demand for digital services than the other. That person uses the services more, and in doing so shares more information. Having shared more information, they become more concerned.
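A toy simulation can make that mechanism visible. In the sketch below, a single underlying demand for digital services drives both how much a person shares and how concerned they later feel, so concern and sharing come out positively correlated across people even though concern never causes sharing. All of the parameters are made up for illustration.

# Toy simulation of the mechanism described above: demand for digital
# services drives both data sharing and later privacy concern, so the
# two end up positively correlated. All parameters are made up.
import random

random.seed(0)

people = []
for _ in range(10_000):
    demand = random.random()                        # taste for digital services
    sharing = demand + random.gauss(0, 0.1)         # more use -> more data shared
    concern = 0.8 * sharing + random.gauss(0, 0.1)  # sharing breeds concern
    people.append((sharing, concern))

# Correlation between sharing and concern, computed by hand.
n = len(people)
mean_s = sum(s for s, _ in people) / n
mean_c = sum(c for _, c in people) / n
cov = sum((s - mean_s) * (c - mean_c) for s, c in people) / n
var_s = sum((s - mean_s) ** 2 for s, _ in people) / n
var_c = sum((c - mean_c) ** 2 for _, c in people) / n
print(f"corr(sharing, concern) = {cov / (var_s * var_c) ** 0.5:.2f}")
# Prints a strongly positive correlation: the 'paradox' pattern.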
That’s right.
AI is influencing everything now and will continue to do so. Currently, we still regard AI as a tool. But soon I think we are going to treat AIs as agents or partners that can help us make a lot of decisions, including financial ones.
This will extend beyond the investment advice we spoke about earlier. There could be consumer-facing AI products that shop for you. The AI could hunt around different websites for the product you want, find the best deal, and purchase it for you.
In some respects, this is already the case. Do you remember the brand of the last thing you bought on Amazon? Shopping on that platform is already mediated by algorithms that drive recommendations and so on.
There's a concept in Chinese culture of 'the middle', also known as the 'Way of Zhongyong' or the 'golden mean'. It's like when you are good at something but not super famous, or you're not at either extreme. Wadham is like that. It's well-rounded in all dimensions. For example, we're not the oldest College, but we still have 400 years of history.
I like how casual and open Wadham is. We don't stand out for one specific thing, but we're a warm, comfortable place.
Many thanks to Shumiao for his time.