Share Name: Imagination Technologies Group
Share Symbol: LSE:IMG | Market: London | Share Type: Ordinary Share
Share ISIN: GB0009303123 | Share Description: ORD 10P

Price Change: -4.00p (-3.81%) | Share Price: 101.00p
Bid Price: 100.75p | Offer Price: 101.25p
High Price: 103.50p | Low Price: 100.25p | Open Price: 100.25p
Shares Traded: 673,651.00 | Last Trade: 11:54:27

Industry Sector: Technology Hardware & Equipment
Turnover (m): 120.0 | Profit (m): -63.2 | EPS - Basic: -29.8 | PE Ratio: - | Market Cap (m): 286.15

Imagination Technologies Share Discussion Threads

Showing 41326 to 41348 of 41350 messages
27/4/2017
10:33
hxxps://www.imgtec.com/news/press-release/imagination-shatters-price-barriers-to-soc-embedded-software-development/
orkney
27/4/2017
10:03
What, 98,000 sold, 177,000 bought thus far?
orkney
27/4/2017
09:55
Heavy selling here today. Looks as though the decisive break down below £1 may be about to happen. Gap down then to 60p.
mallorca 9
24/4/2017
12:07
Without knowing otherwise, there is a chance that Apple may infringe IMG's IP, and if so IMG would presumably have some lucrative leverage. But assume Apple have dotted the i's and crossed the t's and that their technology doesn't infringe IMG's or anyone else's IP. If it were also assumed that their resulting graphics technology is far better than IMG's and the rest of the competition, then wouldn't IMG have a greater opportunity to get into the high-volume lower/middle end of the market, which will now need a much higher-level graphics engine so as not to look prehistoric against Apple?
stunger
24/4/2017
06:58
Rossi, I've just returned from holiday and have read your post ... can you please help me to understand how this can be fairly valued at nearly £300m?
mallorca 9
23/4/2017
11:04
Borromini, an Apple-only GPU? Based upon Apple's statement of 15-24 months, I would have thought it highly unlikely.
orkney
23/4/2017
09:27
The Sunday Times says IMG is considering selling a stake in MIPS to plug the funding gap.
willie99
23/4/2017
09:01
TSMC is reported to be fabricating and delivering 50 million A11 chips between April and July 2017, and 100 million by the end of 2017. Does anyone think these will contain an Apple-only GPU with no IMG IP? hTtp://appleinsider.com/articles/17/03/27/tsmc-gearing-up-for-a11-chip-production-likely-for-iphone-7s-and-iphone-8----report
borromini1
23/4/2017
08:38
Sheep_Herder - Having got to the position that a dedicated GPU/GPGPU is best for graphics and good for tasks such as machine learning, while a dedicated parallel compute engine is best for compute and could be good for graphics... Two questions. First, can you provide lists or article links covering the best applications and examples of existing parallel compute engines, at any size but particularly mobile size? Second, at what point would a combination of both chips be most effective for the different power and area budgets of mobile phones, tablets and laptops, and how similar would such a solution be to existing two-chip complementary combinations, such as ARM's big.LITTLE approach, or the integrated-plus-dedicated graphics approach in laptops, where entire pipelines are more or less replicated to address power and performance issues?
borromini1
21/4/2017
14:36
I'm not going to bother any more after this, but the public release of an API comes long after the companies involved have hashed it out for years. The reason IMG, ARM, AMD, Intel, Nvidia etc. pay lots of money to sit on these panels is so that they can get their features into each release and give their input on what they think should and should not go in. Vulkan was a very quick turnaround as it was in response to Metal. The architectures and ISA for a GPU are defined in conjunction with the API development so that there are no surprises. It's no use building something that won't support the upcoming APIs for 3 years after the GPU is released. That would be a very expensive mistake. hTtps://www.khronos.org/news/press/khronos-releases-vulkan-1-0-specification
sheep_herder
21/4/2017
13:45
“You cannot build a GPU for a graphics API and then hope to some day run OpenCL on it.” Then why have we seen GPUs predate the release of an API by over a year and a half and still fully support the new API? Depending on the features the API adds, all or parts of the new API can be supported at a later date. For example, the SGX543 predates the release of OpenCL 1.1 by well over a year and supports 1.1. Take Vulkan: it's a new API that works on GPUs designed 2+ years before Vulkan was released.

I agree you can optimise a GPU for compute; what I disagree with is how big a change that is. Relatively speaking, compared to the rest of the GPU it's not a massive change with loads of hardware sitting idle doing nothing while you render graphics. A lot of what is useful for compute is also useful for rendering graphics. Removing GPGPU support doesn't mean you can remove all the hardware you use to do GPGPU, as you still use a lot of that hardware for graphics. The way I see it, to make a second chip for compute only that cannot render graphics, you have to duplicate hardware you already have in the GPU. The way forward is integration, not separating things out.

Thank you for coming back and explaining things in detail. I missed graphics discussions like this, even if we don't agree on how much hardware can be removed to lose GPGPU support. I agree some hardware can be removed if you lose GPGPU support. But how much hardware would you estimate is used both in GPGPU and graphics and would have to be duplicated in a dedicated compute-only, no-graphics chip?
pottsey
20/4/2017
23:08
That's not very helpful; if I made a technical error please point it out and I will correct myself. I know I sometimes make mistakes, so I just re-read up on GPGPU, and everything I have says it's fundamentally a software concept. You sometimes get a few small hardware changes, like multi-level caches to optimise for mainstream compute, but at its core GPGPU is a type of algorithm, not a piece of equipment. After reading the documents on GPGPU it appears as though it's you who is wrong, not me.

Take the SGX GPU, or any other GPU which had a GPGPU API like OpenCL added at a later date without any hardware changes. You can split a GPU into two core parts: a fixed-function area and a programmable hardware area. That programmable hardware area is used to render modern graphics like shaders and to do GPGPU, as they are the same thing. By improving the programmable hardware area you are improving both the rendering of graphics and GP compute power. They are related because they are the same hardware for the most part. Like I said before, there are a few small optimisations, but it's not a case of tons of extra hardware being added to do GPGPU like you said. You don't get tons of GPGPU hardware sitting idle when rendering graphics, just like you don't get programmable graphics hardware sitting idle when doing GPGPU, as it's mostly all the same hardware bar a few tiny changes.

The basic version is: the programmable hardware for rendering graphics like shaders is the compute hardware that does GPGPU. You cannot remove this hardware from the GPU, as then you cannot render graphics. So by making a second compute-only chip that cannot render graphics you are duplicating hardware you already have inside the GPU. All the second chip is doing is wasting die space and wasting energy for no benefit. If you need more compute power you don't waste die space duplicating what you already have; you use that space to improve the current GPU programmable hardware that does GPGPU. Both GPGPU tech docs and the GPGPU wiki page confirm what I have said.
pottsey
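To put the "GPGPU is mostly a software concept" argument in concrete terms, below is a minimal sketch of a general-purpose GPU job, written in CUDA purely for illustration (the thread above concerns OpenCL on PowerVR-class hardware; nothing in this sketch comes from IMG or Apple). The kernel is simply a small program dispatched across the GPU's programmable ALUs, the same units that execute shader programs when the chip is rendering graphics.

```cuda
// Minimal GPGPU sketch (illustrative only): a SAXPY kernel dispatched across
// the GPU's programmable units -- the same units that run shader code when
// the chip is rendering graphics.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));        // unified memory for simplicity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // launch across the GPU
    cudaDeviceSynchronize();                         // wait before reading on the host

    printf("y[0] = %f\n", y[0]);                     // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Whether a chip that kept these programmable units but dropped the graphics-specific fixed-function logic would save a little silicon or a lot is exactly what the posters here disagree on.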
20/4/2017
21:03
Bored now. You can lead a horse to water... Edit: the sad thing is that you've got 3 likes which means not only are you misguided but there are others that are falling for it too. Shame really.
sheep_herder
20/4/2017
18:30
“A GPU that is built to only run graphics APIs, by definition, will be simpler to implement than one that must support said graphics API plus a compute API.” A GPU that is built without compute is useless for rendering today's graphics; even that pipeline you linked to is made up of compute hardware. Yes, you could build a GPU that cannot do GPGPU, but there is only a tiny amount of difference, as most of the hardware you use for the compute API is the same hardware you use and need in a modern-day GPU. There is very little difference. Removing GPGPU support doesn't remove the compute hardware, as you still need the compute hardware to render graphics. Hence GPGPU is a software concept, not a large change in hardware.

“That extra logic that isn't in use when rendering, or alternatively running compute jobs, is sat there at best idling away burning leakage power, and at worst toggling away doing nothing burning dynamic power too.” For the most part it's the same hardware that renders graphics and does GP compute work. The compute hardware is not sitting there idling away when you do graphics rendering; it's used for rendering. You cannot remove compute from the GPU as it's a core part of what's needed to render modern graphics. So why duplicate all that compute hardware you already have and waste space and power duplicating functions in a second chip? Unless you have some sort of device that doesn't need to render graphics, there is zero reason to have a compute-only chip with all the graphics rendering bits removed.

“A graphics rendering pipeline like OpenGL, for example. Maybe this will give you a clue: hTtps://www.khronos.org/opengl/wiki/Rendering_Pipeline_Overview” That agrees with what I said. EDIT: The four blue stages in that pipeline are run on programmable compute hardware even though it is not a GPGPU API.
pottsey
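To illustrate the converse point, that shading itself is compute work, the sketch below does per-pixel "shading" with a plain compute kernel instead of a fragment shader. The CUDA setting and the toy centre fall-off lighting are illustrative assumptions, not anything from the posts: a fragment shader does the same kind of data-parallel work on the same programmable units, with the launch grid standing in for the rasteriser handing out pixels.

```cuda
// Illustrative sketch only: per-pixel "shading" expressed as a compute kernel.
// A fragment shader performs essentially the same data-parallel work on the
// same programmable units; here the 2D launch grid hands out pixels.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void shade(unsigned char *rgb, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Toy "lighting": brightness falls off with distance from the image centre.
    float dx = (x - width  / 2.0f) / (width  / 2.0f);
    float dy = (y - height / 2.0f) / (height / 2.0f);
    float d  = sqrtf(dx * dx + dy * dy);
    float s  = d > 1.0f ? 0.0f : 1.0f - d;

    int i = 3 * (y * width + x);
    rgb[i + 0] = (unsigned char)(255.0f * s);         // R
    rgb[i + 1] = (unsigned char)(255.0f * s * 0.5f);  // G
    rgb[i + 2] = (unsigned char)(255.0f * s * 0.25f); // B
}

int main() {
    const int w = 256, h = 256;
    unsigned char *img;
    cudaMallocManaged(&img, 3 * w * h);

    dim3 block(16, 16);
    dim3 grid((w + 15) / 16, (h + 15) / 16);
    shade<<<grid, block>>>(img, w, h);                 // "rasterise" via the grid
    cudaDeviceSynchronize();

    printf("centre pixel R = %d\n", img[3 * (h / 2 * w + w / 2)]);  // near 255
    cudaFree(img);
    return 0;
}
```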
20/4/2017
18:05
Pottsey, oh dear. A graphics rendering pipeline like OpenGL, for example. Maybe this will give you a clue: hTtps://www.khronos.org/opengl/wiki/Rendering_Pipeline_Overview You can implement that graphics pipeline in hardware much more efficiently if you don't have to support the GPGPU side, think OpenCL. Adding support for features isn't free. Adding more compute elements to attain a high compute throughput isn't free. Trying to fit that extra logic into a standard rendering pipeline isn't free. Adding support to the driver stack and the front-end control element to process that extra overhead isn't free. A GPU that is built to only run graphics APIs, by definition, will be simpler to implement than one that must support said graphics API plus a compute API. That extra logic that isn't in use when rendering, or alternatively running compute jobs, is sat there at best idling away burning leakage power, and at worst toggling away doing nothing, burning dynamic power too. I don't have time to teach you how to design a chip, sorry, so I'll leave it there.
sheep_herder
20/4/2017
16:08
Yes it is a software concept, and loads of extra hardware is not added, as the same hardware is already needed to render graphics. Take away compute from a GPU and you cannot render modern-day graphics. The extra compute logic is not automatically redundant during modern graphics execution. Shaders, for example, which these days make up a very large part of graphics rendering, are all run on the compute hardware. I am not sure which old traditional graphics rendering pipeline you are talking about, as pre-shader days were around 15 years ago. The current system is an optimised system and I don't see what big PPA trade-offs there are to be made.
pottsey
20/4/2017
13:29
Pottsey - GPGPU is a software concept? Eh? A GPU that can function as a compute engine has a load of extra hardware added to provide this functionality. You have a load of extra thread handling logic; a load more multiplexing paths and register file access paths that are different from the traditional GPU rendering pipeline; you'll have a whole bunch more arithmetic units in order to provide the performance you need; you may favour integer or floating point units; etc etc. Then you have the added driver complexity on top of that. So no, it's not a "software concept", it's a combined system aimed at being able to execute both a traditional graphics rendering pipeline and pure compute jobs. As such it is not an optimised system and there are big PPA trade-offs to be made. All that extra compute logic is redundant during most of the graphics execution, and the same is true vice versa. The texturing units are never used for compute. All that stuff sitting there unused is not something anyone wants if they can help it. As for RT, the mobile power budget is around 5W total and any Apple compute engine would obviously have to come in under that budget. I agree that RT looks nice, but will anyone outside of a mains-powered device use it? I doubt it. I'm expecting the IMG solution to go the same way as Betamax.
sheep_herder
20/4/2017
12:54
“There's a big difference between the energy efficiency of running a compute task on a GPGPU compared to a dedicated and optimised engine.” That doesn't make sense to me: first, GPGPU is a software concept, not a hardware concept, and second, a modern-day GPU is a dedicated and optimised massively parallel compute engine. I don't understand what you are trying to say, just like I don't understand why you keep saying RT is pointless for mobile. Now I can understand thinking RT might not take off, but it has clear benefits. It's also been proven that IMG's RT GPU can do in under 10 watts what a high-end dedicated 200+ watt parallel compute engine can do. So I don't see Apple using a high-end compute engine to do RT more efficiently.
pottsey
20/4/2017
12:11
There's a big difference between the energy efficiency of running a compute task on a GPGPU compared to a dedicated and optimised engine. Assuming such an engine was also capable of running Metal graphics jobs, albeit less efficiently than compute jobs, then it's purely a trade-off to be made by Apple. But if they see compute gaining traction and overtaking graphics then it's an easy decision.
sheep_herder
20/4/2017
12:03
Sheep_Herder - I get the notion of a parallel compute engine vs a dedicated ray tracing engine and their differing efficiencies. Doesn't Apple already use the Rogue GPU for Siri on iPhones? IMG already seem focused on compute capabilities.
borromini1
20/4/2017
11:48
It appears this thread is getting as much activity from long-term non-holders as it is from holders. It's almost like some people have decided to pick up where JJ left off. In relation to Sheep_herder's "RT in the way IMG was doing it is a no go", Carmack's assessment was the polar opposite (the man behind Doom and Quake, and involved in Oculus VR technology). His various Twitter posts on the subject include: "yes, that is a very important point, and IMG was very smart about leveraging the existing GPU hardware." "I had reviewed some ray tracing hardware before the PVR stuff that was laughably unsuitable to the proposed gaming applications." "I am very happy with the advent of the PVR Wizard ray tracing tech. RTRT HW from people with a clue!" I would tend to take Carmack's assessment of the Wizard RT over nearly anyone else's. It's one thing having the right solution, but another to get someone to take the jump to RT.
twatcher
20/4/2017
11:46
Erm, that's exactly why I posted it, as further evidence that there is no future for IMG and Apple's relationship.
sheep_herder
20/4/2017
11:11
Sheep_Herder - Recruiters recruit. I'm sure the Apple recruiter placed that link to avoid the liability of making their own statement on Apple's intent. You can almost hear them say: "I can't tell you what we are doing; AppleInsider haven't got it all correct, but it gives you the gist of where we might be heading", leaving any prospective employee to fill in the gaps.
borromini1