IMG Imagination Technologies Group

181.25
0.00 (0.00%)
26 Apr 2024 - Closed
Delayed by 15 minutes
Share Name: Imagination Technologies Group | Symbol: LSE:IMG | Market: London | Type: Ordinary Share | ISIN: GB0009303123 | Description: ORD 10P
Price Change: 0.00 (0.00%) | Share Price: 181.25 | Bid Price: 181.50 | Offer Price: 181.75 | High/Low/Open Price: - | Shares Traded: 0.00 | Last Trade: 01:00:00
Industry Sector: - | Turnover: 0 | Profit: 0 | EPS - Basic: N/A | PE Ratio: 0 | Market Cap: -

Imagination Technologies Share Discussion Threads

Showing 41176 to 41193 of 43000 messages
20/4/2017
18:30
“A GPU that is built to only run graphics APIs, by definition, will be simpler to implement than one that must support said graphics API plus a compute API.”
A GPU built without compute is useless for rendering today's graphics; even the pipeline you linked to is built from compute hardware. Yes, you could build a GPU that cannot do GPGPU, but the difference would be tiny, as most of the hardware the compute API uses is the same hardware you already need in a modern-day GPU. Removing GPGPU support doesn't remove the compute hardware, because you still need the compute hardware to render graphics. That is why GPGPU is a software concept, not a large change in hardware.
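To make the point concrete, here is a minimal Python sketch (the `shade` arithmetic and all names are hypothetical illustrations): whether the kernel is invoked per-fragment by a rasteriser, as a graphics API does, or over an explicit index grid, as a compute API does, the arithmetic the ALUs execute is identical.

```python
import numpy as np

def shade(normals, light_dir, albedo):
    """Hypothetical per-element Lambert shading: the kind of arithmetic a shader ALU runs."""
    n_dot_l = np.clip(np.sum(normals * light_dir, axis=-1, keepdims=True), 0.0, 1.0)
    return albedo * n_dot_l

def fragment_pass(coverage_mask, normals, light_dir, albedo):
    """'Graphics' invocation: the rasteriser decides which pixels run the kernel."""
    out = np.zeros_like(albedo)
    out[coverage_mask] = shade(normals[coverage_mask], light_dir, albedo[coverage_mask])
    return out

def compute_dispatch(normals, light_dir, albedo):
    """'Compute' invocation: the programmer launches the same kernel over an explicit grid."""
    return shade(normals, light_dir, albedo)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normals = rng.standard_normal((8, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    light_dir = np.array([0.0, 0.0, 1.0])
    albedo = rng.random((8, 3))
    mask = rng.random(8) > 0.5
    # Same arithmetic either way; only the launch model differs.
    print(fragment_pass(mask, normals, light_dir, albedo))
    print(compute_dispatch(normals, light_dir, albedo))
```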


“That extra logic that isn't in use when rendering, or alternatively running compute jobs, is sat there at best idling away burning leakage power, and at worst toggling away doing nothing burning dynamic power too.”
For the most part it’s the same hardware that renders graphics and does general-purpose compute work. The compute hardware is not sitting there idling away when you do graphics rendering; it’s used for the rendering.

You cannot remove compute from the GPU, as it’s a core part of what is needed to render modern graphics. So why duplicate all the compute hardware you already have, wasting space and power by replicating functions in a second chip? Unless you have some sort of device that doesn’t need to render graphics, there is zero reason to have a compute-only chip with all the graphics rendering bits removed.

“A graphics rendering pipeline like OpenGL, for example. Maybe this will give you a clue:”
That agrees with what I said. EDIT: The four blue stages in that pipeline (the programmable shader stages: vertex, tessellation, geometry and fragment) run on programmable compute hardware, even though that is not a GPGPU API.

pottsey
20/4/2017
18:05
Pottsey, oh dear.

A graphics rendering pipeline like OpenGL, for example. Maybe this will give you a clue:

You can implement that graphics pipeline in hardware much more efficiently if you don't have to support the GPGPU side (think OpenCL). Adding support for features isn't free. Adding more compute elements to attain a high compute throughput isn't free. Trying to fit that extra logic into a standard rendering pipeline isn't free. Adding support to the driver stack and the front end control element to process that extra overhead isn't free.

A GPU that is built to only run graphics APIs, by definition, will be simpler to implement than one that must support said graphics API plus a compute API. That extra logic that isn't in use when rendering, or alternatively running compute jobs, is sat there at best idling away burning leakage power, and at worst toggling away doing nothing burning dynamic power too.
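The cost being described can be put in rough numbers with the standard first-order CMOS power model, where dynamic power is roughly activity x capacitance x voltage squared x frequency, plus a leakage term. The figures below are illustrative assumptions, not measurements:

```python
def block_power_watts(switched_cap_f, vdd_v, freq_hz, activity, leakage_a):
    """First-order CMOS model: P = activity*C*V^2*f (dynamic) + V*I_leak (static)."""
    dynamic = activity * switched_cap_f * vdd_v**2 * freq_hz
    leakage = vdd_v * leakage_a
    return dynamic + leakage

# A hypothetical extra compute block that is unused while rendering:
# clock-gated it still leaks; left toggling it burns dynamic power too.
gated    = block_power_watts(2e-9, 0.8, 800e6, 0.00, 0.05)  # leakage only
toggling = block_power_watts(2e-9, 0.8, 800e6, 0.10, 0.05)  # leakage + wasted dynamic
print(f"clock-gated: {gated:.2f} W, toggling idle: {toggling:.2f} W")
```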

I don't have time to teach you how to design a chip, sorry, so I'll leave it there.

sheep_herder
20/4/2017
16:08
Yes, it is a software concept, and loads of extra hardware is not added, because the same hardware is already needed to render graphics. Take away compute from a GPU and you cannot render modern-day graphics.

The extra compute logic is not automatically redundant during modern graphics execution. Shaders, for example, which these days make up a very large part of graphics rendering, all run on the compute hardware. I am not sure which old traditional graphics rendering pipeline you are talking about, as the pre-shader days were around 15 years ago. The current system is an optimised system, and I don't see what big PPA trade-offs there are to be made.

pottsey
20/4/2017
13:29
Pottsey - GPGPU is a software concept? Eh?

A GPU that can function as a compute engine has a load of extra hardware added to provide this functionality. You have a load of extra thread handling logic; a load more multiplexing paths and register file access paths that are different from the traditional GPU rendering pipeline; you'll have a whole bunch more arithmetic units in order to provide the performance you need; you may favour integer or floating point units; etc etc.

Then you have the added driver complexity on top of that. So no, it's not a "software concept", it's a combined system aimed at being able to execute both a traditional graphics rendering pipeline and being able to handle pure compute jobs.

As such it is not an optimised system and there are big PPA trade-offs to be made. All that extra compute logic is redundant during most of the graphics execution, and the same is true vice versa. The texturing units are never used for compute. All that stuff sitting there unused is not something anyone wants if they can help it.

As for RT, the mobile power budget is around 5 W total, and any Apple compute engine would obviously have to come in under that budget. I agree that RT looks nice, but will anyone outside of a mains-powered device use it? I doubt it. I'm expecting the IMG solution to go the same way as Betamax.

sheep_herder
20/4/2017
12:54
“There's a big difference between the energy efficiency of running a compute task on a GPGPU compared to a dedicated and optimised engine.”
That doesn’t make sense to me. First, GPGPU is a software concept, not a hardware concept; and second, a modern-day GPU is a dedicated and optimised massively parallel compute engine.

I don’t understand what you are trying to say, just as I don’t understand why you keep saying RT is pointless for mobile. I can understand thinking RT might not take off, but it has clear benefits.

It’s also been proven that the IMG RT GPU can do in under 10 watts what a high-end dedicated 200+ watt parallel compute engine can do. So I don’t see Apple using a high-end compute engine to do RT more efficiently.
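Taking the thread's claimed figures at face value, and assuming both devices finish the same job in the same time (an assumption, not a verified benchmark), the energy comparison is straightforward arithmetic:

```python
# Energy for a fixed job = power * time. Using the thread's figures as
# assumptions: both devices are taken to finish the same job in the same time.
JOB_SECONDS = 1.0                      # hypothetical workload duration
rt_gpu_joules = 10.0 * JOB_SECONDS     # claimed sub-10 W RT GPU
gpgpu_joules  = 200.0 * JOB_SECONDS    # claimed 200+ W compute engine
print(f"energy ratio: {gpgpu_joules / rt_gpu_joules:.0f}x")  # ~20x under these assumptions
```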

pottsey
20/4/2017
12:11
There's a big difference between the energy efficiency of running a compute task on a GPGPU compared to a dedicated and optimised engine. Assuming such an engine was also capable of running Metal graphics jobs, albeit less efficiently than compute jobs, then it's purely a trade-off to be made by Apple. But if they see growth in compute gaining traction and overtaking graphics then it's an easy decision.
sheep_herder
20/4/2017
12:03
Sheep_Herder - Get the notion of a parallel compute engine vs a dedicated ray tracing engine and their differing efficiencies. Doesn't Apple already use the Rogue GPU for Siri on iPhones? IMG already seem focused on compute capabilities.
borromini1
20/4/2017
11:48
It appears this thread is getting as much activity from long-term non-holders as it is from holders. It's almost like some people have decided to pick up where JJ left off.

In relation to Sheep_herder's "RT in the way IMG was doing it is a no go", Carmack's assessment was the polar opposite. (He is the man behind Doom and Quake, and is involved in Oculus VR technology.)

His various Twitter posts on the subject include:

"yes, that is a very important point, and IMG was very smart about leveraging the existing GPU hardware."

"I had reviewed some ray tracing hardware before the PVR stuff that was laughably unsuitable to the proposed gaming applications."

"I am very happy with the advent of the PVR Wizard ray tracing tech. RTRT HW from people with a clue!"

I would tend to take Carmack's assessment of the Wizard RT over nearly anyone else's.

It's one thing having the right solution, but another to get someone to take the jump to RT.

twatcher
20/4/2017
11:46
Erm, that's exactly why I posted it, as further evidence that there is no future for IMG and Apple's relationship.
sheep_herder
20/4/2017
11:11
Sheep_Herder - Recruiters recruit. I'm sure the Apple recruiter placed that link to avoid the liability of making their own statement on Apple's intent. You can almost hear them say: "I can't tell you what we are doing; AppleInsider haven't got it all correct, but it gives you the gist of where we might be heading", leaving any prospective employee to fill in the gaps.
borromini1
20/4/2017
10:10
Meanwhile the share price seems to have switched from support at 100p to resistance at 100p. From a trading point of view, quite risky now.
mister md
20/4/2017
10:03
I actually hadn't thought about RT, borro. RT in the way IMG was doing it is a no-go, but if the rumours are correct and Apple are developing a parallel compute engine that favours compute over traditional graphics duty, then they may be able to do RT more efficiently.

But overall, I'm still of the opinion that RT is pointless in mobile. Still, if anyone can get it right it would be Apple. Especially as they'll now control nearly all of their system - which is what the article was trying to get at if you missed it.

sheep_herder
20/4/2017
09:55
Sheep_Herder - you've changed your tune, suddenly mobile Ray Tracing is all possible from Apple. The lady is for turning.
borromini1
20/4/2017
09:48
You're too funny, borromini. The lady doth protest too much.

If senior Apple staff are linking that article, I'd take more from that than I would your ramblings.

sheep_herder
19/4/2017
17:21
The senior technical recruiter at Apple is sharing this on LinkedIn:
sheep_herder
19/4/2017
14:47
Equally - some will ensure their contract covers all angles: volume up, lower rate; volume down, higher rate; breach of contract, another rate ;-)
adventurous
19/4/2017
12:29
I don't agree with that article regarding royalty rate reductions. There's no way that Apple can "ratchet down the royalty rate" unless IMG didn't stipulate that in the contract, which would be akin to giving it all away for free.

Contracts tend to allow royalty rates to drop at certain shipment milestones (i.e. the more you sell, the smaller the per-unit royalty); otherwise you are locked in until the contract expires, which is normally the end of life of the GPU.
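A minimal sketch of that milestone structure; the thresholds and per-unit rates below are hypothetical illustrations, not IMG's actual contract terms:

```python
# Hypothetical volume-milestone royalty schedule: the per-unit rate steps
# down once cumulative shipments cross each threshold.
TIERS = [                    # (units shipped up to, royalty per unit in $)
    (100_000_000, 0.30),
    (250_000_000, 0.25),
    (float("inf"), 0.20),
]

def royalty_due(units: int) -> float:
    """Total royalty for `units` cumulative shipments under the tiered schedule."""
    total, prev_cap = 0.0, 0
    for cap, rate in TIERS:
        in_tier = max(0, min(units, cap) - prev_cap)
        total += in_tier * rate
        prev_cap = cap
        if units <= cap:
            break
    return total

print(f"${royalty_due(300_000_000):,.0f}")  # 100M @ 0.30 + 150M @ 0.25 + 50M @ 0.20
```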

sheep_herder
19/4/2017
10:06
Yes, but it would give a better opportunity for a takeover by another party, that is, if anyone 'would' be interested at all.
orkney
