AI. Aero Inventory

264.00
0.00 (0.00%)
02 May 2024 - Closed
Delayed by 15 minutes
Share Name: Aero Inventory | Symbol: LSE:AI. | Market: London | Type: Ordinary Share | ISIN: GB0004440847 | Description: ORD 1.25P
Price Change: 0.00 (0.00%) | Share Price: 264.00 | Last Trade: 01:00:00

Aero Inventory Share Discussion Threads

Showing 2901 to 2913 of 3175 messages
Chat Pages: 127  126  125  124  123  122  121  120  119  118  117  116  Older
27/11/2017
11:54
(CercleFinance.com) - The Swiss-Swedish engineering group ABB announces an agreement with the Japanese group Kawasaki Heavy Industries, with the aim of sharing knowledge and promoting the benefits of collaborative robots ('cobots').

The cooperation will focus on robots with double arm designs, which allow tasks like assembling small electronic devices. A first demonstration is planned at the International Robot Exhibition, to be held in Tokyo this week.

grupo
27/11/2017
09:54
How about an ETF run by robots?

NOT a suggestion but clearly a reality.

hazl
25/11/2017
21:39
quant_investor
25 Nov '17 - 21:16 - 75 of 75
Polar Capital have announced they are launching a fund.



There is also an ETF for Robotics, Automation, and AI (I have a position here)

sarkasm
25/11/2017
21:16
Polar Capital have announced they are launching a fund.

hxxp://www.whatinvestment.co.uk/polar-capital-launches-ai-fund-looks-to-outperform-global-equities-benchmark-2554302/

There is also an ETF for Robotics, Automation, and AI (I have a position here)

hxxps://www.etfsecurities.com/retail/uk/en-gb/robo-global-robotics-and-automation-go-ucits-etf

quant_investor
25/11/2017
20:45
Any AI funds worth purchasing?
pallys
20/11/2017
16:35
85% UK businesses to invest in artificial intelligence by 2020
Posted on 20 Nov 2017 by Marc Hauschild THE MANUFACTURER

Almost nine in ten (85%) senior executives plan to invest in artificial intelligence (AI) and the internet of things (IoT) by 2020, according to a new survey.

The findings come from the first edition of a new regular report, the Digital Disruption Index.

The survey, conducted by Deloitte, will track investment in digital technologies and create a detailed picture of their impact on the largest and most influential business and public-sector bodies.

The first edition includes responses from 51 organisations with a combined market value of £229bn.

Over half of survey respondents expect that by 2020, they will invest more than £10m in digital technologies and ways of working – such as AI, cloud, robotics, blockchain, analytics, the IoT, and virtual and augmented reality.

More than 70% say they will invest in robotics, 63% in augmented and virtual reality, 62% in wearables, 54% in biometrics (such as voice and finger recognition), and 43% in blockchain.

This year alone, 30% of UK organisations will invest more than £10m in these technologies. But when compared with corporate IT budgets this represents a rather modest amount of investment. According to separate Deloitte research, the majority of IT functions have budgets of over £20m, while a quarter of corporate IT functions spend more than £75m annually.

As a likely consequence, at this stage only 9% of executives believe that UK companies are world leading at exploring and implementing digital technologies and ways of working.

Paul Thompson, UK digital transformation leader at Deloitte, explains: “The first edition of the index shows that few UK businesses are successfully exploiting digital technologies and ways of working. Strategies are not coherent, investment levels are modest and the relevant skills are in short supply. As a result, the UK isn’t living up to its digital potential.”
Rise of artificial intelligence

Of all of the digital technologies surveyed, executives believe AI will have the biggest impact on their organisations in the future. Investment in AI at this stage remains modest though, with only 22% having already invested in it.

Of those that have invested, only a third expect to spend more than £1m in 2017. This suggests that organisations are currently testing with pilots, rather than large scale deployments.

Just over three-quarters of leaders (77%) expect AI to disrupt their industries. Just under half expect their workforce to shrink as they adopt AI, but only 8% believe AI will directly replace human activity. Over a third believe AI will be used to augment expertise, with a focus on improving human decision-making.

Thompson adds: “AI will have a profound impact on the future of work. Our view is that human and machine intelligence complement each other, and that AI should not simply be seen as a substitute. Humans working with AI will achieve better outcomes than AI alone, and UK businesses need to get this careful balance right.”
The digital skills gap

Only 20% of leaders believe there are enough school leavers and graduates entering the labour market with the appropriate digital skills and experience.

Three-quarters of executives face challenges recruiting the right digital skills. Leaders say data scientists and analysts are the most important roles for a successful digital strategy, but these are also the most difficult digital roles to retain and recruit.

Over half of leaders say their organisation’s learning and development curriculum does not support their digital strategy. 45% say that their organisation does not provide them and other leaders with the resources needed, such as training and coaching through a variety of multimedia channels, to develop their own digital skills.

Thompson concludes: “The UK has the opportunity to be a market-leader in harnessing and exploiting the opportunities digital brings, including increased productivity and driving growth. But it’s clear that digital skills, at all levels, are in short supply and high demand.

“In our view, digital represents both a business and public policy issue which needs educators, policymakers and business to work more closely together, in order to meet the current and future demands of the UK’s economy.”

maywillow
19/11/2017
12:26
HAZL

know what you mean


i am of the age where the parts need changing

need a new back at present as it seems to have painful twinges if i make a false movement

ouch ouch

sarkasm
19/11/2017
12:21
THANK YOU Sarkasm for posting my link!
Just haven't the energy at the moment!

hazl
19/11/2017
11:42
SOPHIE




INCREDIBLE HAZL

sarkasm
19/11/2017
11:24
I can quite see a time when we are told that we must treat robots with respect in the same way as politically correct issues are addressed today.
hazl
19/11/2017
11:08
Have you seen that video with the robot at the United Nations conference a few days back? It's in my post 2117 on the PRSM thread, sarkasm.


I agree that the Boston Dynamics videos, with the work they are doing, are incredible.
I found the UN one around the same time; it is very new... this month.

I find myself in awe of the technology but at the same time fearful of the strides they are making without the safety net of experience.
Scientists need to collaborate with those that have the foresight to take an overview, so that ethics and common sense can work together in everybody's best interests.

hazl
15/11/2017
05:24
Artificial intelligence and the stability of markets

Jon Danielsson 15 November 2017

Artificial intelligence is increasingly used to tackle all sorts of problems facing people and societies. This column considers the potential benefits and risks of employing AI in financial markets. While it may well revolutionise risk management and financial supervision, it also threatens to destabilise markets and increase systemic risk.

Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.

We can leave an AI machine in the day-to-day charge of such a system, automatically self-correcting and learning from mistakes and meeting the objectives of its human masters.

This means that risk management and micro-prudential supervision are well suited for AI. The underlying technical issues are clearly defined, as are both the high- and low-level objectives.

However, the very same qualities that make AI so useful for the micro-prudential authorities are also why it could destabilise the financial system and increase systemic risk, as discussed in Danielsson et al. (2017).
Risk management and micro-prudential supervision

In successful large-scale applications, an AI engine exercises control over small parts of an overall problem, where the global solution is simply aggregated sub-solutions. Controlling all of the small parts of a system separately is equivalent to controlling the system in its entirety. Risk management and micro-prudential regulations are examples of such a problem.

The first step in risk management is the modelling of risk and that is straightforward for AI. This involves the processing of market prices with relatively simple statistical techniques, work that is already well under way. The next step is to combine detailed knowledge of all the positions held by a bank with information on the individuals who decide on those positions, creating a risk management AI engine with knowledge of risk, positions, and human capital.
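
As a toy illustration of the 'relatively simple statistical techniques' involved, a historical value-at-risk estimate can be computed directly from a window of observed returns. This is a minimal sketch with simulated data standing in for market prices, not any bank's actual model:

```python
import numpy as np

def historical_var(returns, confidence=0.99):
    """Historical value-at-risk: the loss threshold exceeded on
    only (1 - confidence) of the observed days."""
    # Losses are negative returns; VaR is a high percentile of losses.
    losses = -np.asarray(returns)
    return np.percentile(losses, confidence * 100)

# Simulated daily returns standing in for roughly ten years of market data.
rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=2500)

var_99 = historical_var(returns, 0.99)  # daily loss exceeded ~1% of the time
```

A production risk engine would layer position sizes, correlations, and the human-capital information described above on top of this kind of estimate; the statistical core, however, is no more exotic than this.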

While we still have some way to go toward that end, most of the necessary information is already inside banks' IT infrastructure and there are no insurmountable technological hurdles along the way.

All that is left is to inform the engine of a bank’s high-level objectives. The machine can then automatically run standard risk management and asset allocation functions, set position limits, recommend who gets fired and who gets bonuses, and advise on which asset classes to invest in.

The same applies to most micro-prudential supervision. Indeed, AI has already spawned a new field called regulation technology, or 'regtech'.

It is not all that hard to translate the rulebook of a supervisory agency, now for most parts in plain English, into a formal computerised logic engine. This allows the authority to validate its rules for consistency and gives banks an application programming interface to validate practices against regulations.
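
A rulebook translated into a 'formal computerised logic engine' can be sketched as a set of machine-checkable predicates over a book of positions. The rule names and thresholds below are hypothetical and far simpler than any real supervisory rulebook:

```python
from dataclasses import dataclass

@dataclass
class Position:
    asset_class: str
    notional: float  # position size in GBP

# Hypothetical rulebook entries, each a predicate over the whole book.
RULES = {
    # No single position may exceed a notional limit.
    "single_position_limit": lambda book: all(p.notional <= 50e6 for p in book),
    # Equity exposure may not exceed 40% of total notional.
    "equity_concentration": lambda book: sum(
        p.notional for p in book if p.asset_class == "equity"
    ) <= 0.4 * sum(p.notional for p in book),
}

def validate(book):
    """Return the names of all rules the book breaches."""
    return [name for name, rule in RULES.items() if not rule(book)]

book = [Position("equity", 30e6), Position("bond", 60e6)]
breaches = validate(book)  # the bond position breaks the size limit
```

Exposing `validate` behind an API is essentially the consistency-checking interface described above: a bank can query the authority's encoded rules before trading, and the supervisor can run the same checks over submitted books.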

Meanwhile, the supervisory AI and the banks’ risk management AI can automatically query each other to ensure compliance. This also means that all the data generated by banks becomes optimally structured and labelled and automatically processable by the authority for compliance and risk identification.

There is still some way to go before the supervisory/risk management AI becomes a practical reality, but what is outlined above is eminently conceivable given the trajectory of technological advancement. The main hindrance is likely to be legal, political, and social rather than technological.

Risk management and micro-prudential supervision are the ideal use cases for AI – they enforce compliance with clearly defined rules, and their processes generate vast amounts of structured data. They have closely monitored human behaviour, precise high-level objectives, and directly observed outcomes.

Financial stability is different. There the focus is on systemic risk (Danielsson and Zigrand 2015), and unlike risk management and micro-prudential supervision, it is necessary to consider the risk of the entire financial system together. This is much harder because the financial system is for all practical purposes infinitely complex and any entity – human or AI – can only hope to capture a small part of that complexity.

The widespread use of AI in risk management and financial supervision may increase systemic risk. There are four reasons for this.
1. Looking for risk in all the wrong places

Risk management and regulatory AI can focus on the wrong risk – the risk that can be measured rather than the risk that matters.

The economist Frank Knight established the distinction between risk and uncertainty in 1921. Risk is measurable and quantifiable and results in statistical distributions that we can then use to exercise control. Uncertainty is none of these things. We know it is relevant but we can't quantify it, so it is harder to make decisions.

AI cannot cope well with uncertainty because it is not possible to train an AI engine against unknown data. The machine is really good at processing information about things it has seen. It can handle counterfactuals when these arise in systems with clearly stated rules, as with Google's AlphaGo Zero (Silver et al. 2017). It cannot reason about the future when it involves outcomes it has not seen.

The focus of risk management and supervision is mostly risk, not uncertainty. An example is the stock market, where we are well placed to manage the risk arising from it. If the market goes down by $200 billion today, the impact will be minimal because it is a known risk.

Uncertainty captures the danger we don't know is out there until it is too late. Potential, unrealised losses of less than $200 billion on subprime mortgages in 2008 brought the financial system to its knees. If there are no observations on the consequences of subprime mortgages put into CDOs with liquidity guarantees, there is nothing to train on. The resulting uncertainty will be ignored by AI.
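
The training-data problem can be seen in miniature: a purely empirical model assigns zero probability to any loss larger than the worst it has ever observed. A sketch with simulated 'calm period' data:

```python
import numpy as np

rng = np.random.default_rng(0)
# "History": five thousand days of calm-period losses only,
# with no crisis-scale observations anywhere in the sample.
calm_losses = rng.normal(0, 0.01, size=5000)

def empirical_prob_loss_exceeds(history, threshold):
    """Fraction of observed days whose loss exceeded the threshold."""
    return float(np.mean(history > threshold))

# A 10% daily loss never appears in the sample, so the empirical
# model treats it as impossible rather than merely unseen.
p = empirical_prob_loss_exceeds(calm_losses, 0.10)
```

A human risk manager can reason that a 10% loss is possible despite never having observed one; the empirical engine cannot, which is exactly the uncertainty blind spot described above.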

While human risk managers and supervisors can also miss uncertainty, they are less likely to. They can evaluate current and historical knowledge with experience and theoretical frameworks, something AI can’t do.
2. Optimisation against the system

A large number of well-resourced economic agents have strong incentives to take very large risks that have the potential to deliver them large profits at the expense of significant danger to their financial institutions and the system at large. That is exactly the type of activity that risk management and supervision aim to contain.

These agents are optimising against the system, aiming to undermine control mechanisms in order to profit, identifying areas where the controllers are not sufficiently vigilant.

These hostile agents have an inherent advantage over those tasked with keeping them in check, because each only has to solve a small local problem, so their computational burden is much lower than the authority's. Many agents could be doing this simultaneously, and only a few – perhaps just one – need to succeed for a crisis to ensue. Meanwhile, in an AI arms race, the authorities would probably lose out to private-sector computing power.

While this problem has always been inherent in risk management and supervision, it is likely to become worse the more AI takes over core functions. Because we cannot verify how an AI engine reasons, we can only monitor its outputs and trust it; if it then appears to manage without big losses, it will earn that trust.

If we don't understand how an AI supervisory/risk management engine reasons, we had better make sure to specify its objective function correctly and exhaustively.

Paradoxically, the more we trust AI to do its job properly, the easier it can be to manipulate and optimise against the system. A hostile agent can learn how the AI engine operates, take risk where it is not looking, game the algorithms and hence undermine the machine by behaving in a way that avoids triggering its alarms or even worse, nudges it to look away.
3. Endogenous complexity

Even then, the AI engine working at the behest of the macroprudential authority might have a fighting chance if the structure of the financial system remained constant, so that the problem were simply one of sufficient computational resources.

But it isn't. The financial system constantly changes its dynamic structure simply because of the interaction of the agents that make up the system, many of whom are optimising against the system and deliberately creating hidden complexities. This is the root of what we call endogenous risk (Danielsson et al. 2009).

The complexity of the financial system is endogenous, and that is why AI, even conceptually, can’t efficiently replace the macro-prudential authority in the way it can supersede the micro-prudential authority.
4. Artificial intelligence is procyclical

Systemic risk is increased by homogeneity. The more similar our perceptions and objectives, the more systemic risk we create. Diverse views and objectives dampen the impact of shocks and act as a countercyclical, stabilising force that minimises systemic risk.

Financial regulations and standard risk management practices inevitably push towards homogeneity; AI does so even more. It favours best practices and standardised best-of-breed models that closely resemble each other, all of which, no matter how well-intentioned and otherwise positive, also increases procyclicality and hence systemic risk.
Conclusion

Artificial intelligence is useful in preventing historical failures from repeating and will increasingly take over financial supervision and risk management functions. We get more coherent rules and automatic compliance, all with much lower costs than current arrangements. The main obstacle is political and social, not technological.

From the point of view of financial stability, the opposite conclusion holds.

We may miss out on the most dangerous type of risk-taking. Even worse, AI can make it easier to game the system. There may be no solution to this, whatever the future trajectory of technology. The computational problem facing an AI engine will always be much larger than that facing those who seek to undermine it, not least because of endogenous complexity.

Meanwhile, the very formality and efficiency of the risk management/supervisory machine also increases homogeneity in belief and response, further amplifying pro-cyclicality and systemic risk.

The end result of the use of AI for managing financial risk and supervision is likely to be lower volatility but fatter tails; that is, lower day-to-day risk but more systemic risk.
References

Danielsson, J and J-P Zigrand (2015), "A proposed research and policy agenda for systemic risk”, VoxEU.org, 7 August.

Danielsson, J, H S Shin and J-P Zigrand (2009), “Modelling financial turmoil through endogenous risk”, VoxEU.org, 11 March.

Danielsson, J, R Macrae and A Uthemann (2017), "Artificial intelligence, financial risk management and systemic risk", LSE Systemic Risk Centre special paper 13.

Knight, F H (1921), Risk, Uncertainty and Profit, Houghton Mifflin.

Silver, D et al. (2017), "Mastering the game of Go without human knowledge", Nature 550: 354-359.

maywillow
07/11/2017
06:32
Two thirds of UK office workers want a personal AI powered assistant
Article by: Adobe | Published: 7 November 2017

According to research from Adobe, two thirds (66 percent) of UK office workers want to use AI technology at work so that they can have their very own personal assistant to share everyday tasks with.

Surveying 2,000 full-time and part-time office professionals in the UK, Adobe's study reveals that far from fearing for their future careers, over two thirds (68 percent) of respondents say they aren't fazed by the growth of advanced technologies like artificial intelligence (AI), as they feel their role will still need human abilities that technology can't replace.

AI: You’re Hired!
Most office workers view technology in the workplace as a positive force, with the majority (86 percent) saying it already improves their working day, helping them to be more productive (85 percent) and enabling them to connect with their co-workers (78 percent). The top tasks that respondents wanted AI assistance with include: reminders of projects or appointments (46 percent); help with research on a work topic (36 percent); and searches of electronic documents for information (30 percent).

Despite wanting an AI assistant to help with more admin-based tasks, workers are less eager to use them for more strategic tasks, with only: 16 percent of people willing to use AI for creative suggestions or ideas for writing content; 16 percent wishing to use AI for feedback on tone or style of emails or longer-form documents; 10 percent welcoming suggestions from AI on how to grow their network of colleagues.

Mark Greenaway, Head of Emerging Business EMEA at Adobe, said: “The research clearly shows that UK office workers are very open to embracing advanced technology like AI to augment their working day. Considering the often sensationalistic and inaccurate reports about AI technology and its impact on our lives, it’s important that workers remember that AI can help make their lives easier, so they have more free time for innovating and being productive.”

Surviving a technology-rich future
UK office workers believe that 60 percent of admin-based office tasks will be done by technology in the next 20 years. As a result, the majority (87 percent) also predict that their job will change in the next five years. Given the expected growth in workplace technology and the uncertainty over how exactly it will impact jobs, only 19 percent say they feel ‘very equipped’ to deal with advanced technology.

Mark Greenaway of Adobe continues: “Humans don’t feel like they’re just a cog in a machine. Our study shows that office workers are confident that they’ll continue to matter in the workplace, even in a world of fast-developing technology. Our findings suggest that people are open to change, but they also show that workers want to be confident when using new technologies – currently the majority don’t feel they have the skills necessary to do so – so more needs to be done by businesses. As long as employees adopt a learn-it-all mindset, and companies design user-centric technology that’s intuitive, technology and work patterns should evolve hand in hand.”

the grumpy old men