TIDMTNT
Tintra PLC
28 June 2022
TINTRA PLC
("Tintra", the "Group" or the "Company")
Recent Press Articles
"AI in banking needs to be 'explainable'"
Richard Shearer, Group CEO, recently authored an article, "AI in banking needs to be 'explainable'", which was published in Finance Derivative, a global finance and business analysis magazine published by FM Publishing, Netherlands. The article can be viewed online at:
https://www.financederivative.com/ai-in-banking-needs-to-be-explainable/
A full copy of the text can also be found below.
For further information, contact:
Tintra PLC (Communications Head)
Hannah Haffield
h.haffield@tintra.com
Website: www.tintra.com
020 3795 0421

Allenby Capital Limited (Nomad, Financial Adviser & Broker)
John Depasquale / Nick Harriss / Vivek Bhardwaj
020 3328 5656
Tintra - Comment Piece
AI in banking needs to be 'explainable'
In the world of banking, AI is capable of making decisions free from the errors and prejudices of human workers - but we need to be able to understand and trust those decisions.
This growing recognition of the importance of 'Explainable AI' (XAI) isn't unique to the world of banking; it is a principle that animates discussion of AI as a whole.
IT and communications network firm Cisco has recently articulated a need for "ethical, responsible, and explainable AI" to avoid a future built on un-inclusive and flawed insights.
It's easy to envisage this kind of future unfolding: in early February, it was revealed that Google's DeepMind AI is now capable of writing computer programs at a competitive level. If we can't spot flaws and errors at this stage, a snowball effect of automated, sophisticated, but misguided AI could start to dictate all manner of decisions, with worrying consequences.
In some industries, these consequences could be life-or-death. Algorithmic interventions in healthcare, for example, or the AI-based decisions made by driverless cars need to be completely trustworthy - which means we need to be able to understand how such systems arrive at their decisions.
Though banking-related AI may not capture the imagination as vividly as a driverless car turned rogue by its own artificial intelligence, the consequences of opaque, black box approaches are no less concerning - especially in the world of AML, in which biased and faulty decision-making could easily go unnoticed, given the prejudices which already govern that practice.
As such, when AI is used to make finance and banking-related decisions that can have ramifications for individuals, organisations, or even entire markets, its processes need to be transparent.
Explaining 'explainable' AI
To understand the significance of XAI, it's important to define our terms.
According to IBM, XAI is "a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms."
These methods are increasingly necessary as AI capabilities continue to advance.
Those outside the sphere of this technology might assume that the data scientists and engineers who design and create these algorithms can understand how their AI makes its decisions, but this isn't necessarily the case.
After all, AI is, as a rule, employed to perform and exhibit complex behaviours and operations; outperforming humans is therefore a sought-after goal on the one hand and an insidious risk on the other - hence the need for interpretable, explainable AI.
There are many business cases to be made for the development of XAI, with the Royal Society pointing out that interpretability in AI systems ensures that regulatory standards are being maintained, system vulnerabilities are assessed, and policy requirements are met.
However, the more urgent thread running throughout discussions of XAI is the ethical dimension of understanding AI decisions.
The Royal Society points out that achieving interpretability safeguards systems against bias; PwC names "ethics" as a key advantage of XAI; and Cisco points to the need for ethical and responsible AI in order to address the "inherent biases" that can - if left unchecked - inform insights that we might be tempted to act upon uncritically.
This risk is especially urgent in the world of banking, and in AML in particular.
Bias - eliminated or enhanced?
Western AML processes still rely on a great deal of human involvement - and, crucially, human decision-making.
This leaves the field vulnerable to a range of prejudices and biases against people and organisations based in emerging markets.
On the face of it, these biases would appear to be rooted in risk-averse behaviours and calculations - but, in practice, the result is an unsophisticated and sweeping set of punitive hurdles that unfairly inconvenience entire emerging regions.
Obviously, this set of circumstances seems to be begging for AI-based interventions in which prejudiced and flawed human workers are replaced with the speed, efficiency, and neutral coolness of calculation that we tend to associate with artificial intelligence.
However, while we believe this is absolutely the future of AML processes, it's equally clear that AI isn't intrinsically less biased than a human - and, if we ask an algorithm to engage with formidable amounts of data and forge subtle connections to determine the AML risk of a given actor or transaction, we need to be able to trust and verify its decisions.
That, in a nutshell, is why explainable AI is so necessary in AML: we need to ensure that AI resolves, rather than repeats, the issues that currently characterise KYC/AML practices.
There are different ways this can be achieved. The Royal Society proposes two categories: either the development of "AI methods that are inherently interpretable" or, alternatively, "the use of a second approach that examines how the first 'black box' system works."
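By way of illustration only - the sketch below is not drawn from the article and does not describe Tintra's own methods - the second, post-hoc category could look like the following Python fragment, which asks how much each input feature contributes to a "black box" AML risk classifier. The feature names and data are invented for the example.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical AML features: transaction amount, counterparty risk, country risk.
X = rng.random((500, 3))
# Synthetic "flagged" label, used purely to make the sketch runnable.
y = (X[:, 1] + X[:, 2] > 1.0).astype(int)

# Train an opaque model, then examine it from the outside by measuring how much
# shuffling each feature degrades its accuracy.
model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["amount", "counterparty_risk", "country_risk"], result.importances_mean):
    print(f"{name}: {score:.3f}")

An inspection of this kind does not make the underlying model any simpler, but it gives reviewers a way to check whether, for instance, a country-related feature is dominating risk decisions - the sort of scrutiny the article argues AML systems need.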
Transparency and trust
The specific method used to achieve explainable AI in AML isn't as important as the drive to ensure that we don't place all our eggs in a potentially inscrutable basket: any AI we use to eliminate prejudice needs to have trust, confidence, and transparency placed at the heart of its calculations.
If we don't put these qualities first, the 'black box' of incomprehensible algorithms may well continue to put a 'black mark' by the names of innocent organisations whose only crime is to exist in what humans and AI falsely perceive to be the 'wrong place.'
ENDS
Richard Shearer, CEO of Tintra PLC
https://www.tintra.com
This information is provided by Reach, the non-regulatory press release distribution service of RNS, part of the London Stock Exchange. Terms and conditions relating to the use and distribution of this information may apply. For further information, please contact rns@lseg.com or visit www.rns.com.
Reach is a non-regulatory news service. By using this service an issuer is confirming that the information contained within this announcement is of a non-regulatory nature. Reach announcements are identified with an orange label and the word "Reach" in the source column of the News Explorer pages of London Stock Exchange's website so that they are distinguished from the RNS UK regulatory service. Other vendors subscribing for Reach press releases may use a different method to distinguish Reach announcements from UK regulatory news.
RNS may use your IP address to confirm compliance with the terms and conditions, to analyse how you engage with the information contained in this communication, and to share such analysis on an anonymised basis with others as part of our commercial services. For further information about how RNS and the London Stock Exchange use the personal data you provide us, please see our Privacy Policy.
END
(END) Dow Jones Newswires
June 28, 2022 02:00 ET (06:00 GMT)