BASEL, Switzerland (AN) — Central banks are increasingly turning to machine learning applications that use big data to help conduct research and inform decisions about monetary policy and financial stability, according to a new paper released Thursday by the Bank for International Settlements.
As organizations collect larger and more complex data sets, the uses of machine learning — a subfield of artificial intelligence in which applications extract knowledge from data — are proliferating, powering everything from virtual personal assistants and video surveillance to social media and online fraud detection.
About 80% of the world's 250 central banks, governments and international official institutions have been formally discussing big data for uses such as economic research, monetary policy, financial stability, supervision and regulation. That is up from 30% in 2015, according to the 26-page BIS paper.
The Basel, Switzerland-based BIS, which was established in 1930 and aims to promote international cooperation while acting as a bank for central banks, counts 63 central banks and monetary authorities as members, accounting for about 95% of global GDP.
"Rising interest is reflected in the number of central bank speeches that mention big data and do so in an increasingly positive light," the paper said. "And yet, big data and machine learning pose challenges — some of them more general, others specific to central banks and supervisory authorities."
Privacy preferred 'if given a choice'
Among central banks and monetary authorities, for example, there has been legal uncertainty around data privacy, confidentiality, data quality, sampling and representativeness. Some face constraints in setting up IT infrastructure and the staff to run it. As a result, the BIS paper recommends more cooperation among public authorities so central banks can better collect, store and analyze big data.
"Citizens might feel uncomfortable with the idea that central banks, in particular, and governments and large corporations, in general, are scrutinizing their search histories, social media postings or listings on market platforms. While these concerns are not new, the amount of data produced in a mostly unregulated environment makes them more urgent," the paper said. "Fundamentally, these considerations indicate that citizens value their privacy and might be unwilling to share their data if given a choice."
Another issue is “algorithmic fairness”: pre-classified data sets may carry biases around factors such as gender and ethnicity. Some of these biases stem from the subjective judgments people make when labeling words that are not clearly positive or negative.
Two years ago, human rights organizations and technology groups, including Human Rights Watch and the Wikimedia Foundation, released the proposed “Toronto Declaration,” which called for international human rights standards and data ethics to be applied to the development and use of systems that rely on machine learning. It urged governments and tech companies to prevent machine learning systems — such as self-driving cars and translation software — from violating international human rights laws.