TSLA 1x Tsla

-0.30 (-0.08%)
20 Jun 2024 - Closed
Delayed by 15 minutes
Name: 1x Tsla
Symbol: LSE:TSLA
Market: London
Type: Exchange Traded Fund

Price Change: -0.30
% Change: -0.08%
Price: 355.20
Bid Price: 341.80
Offer Price: 368.60
High Price: 387.65
Low Price: 329.975
Open Price: 360.80
Traded: 959
Last Trade: 16:29:57

1x Tsla Discussion Threads

Showing 10751 to 10773 of 10975 messages
Chat Pages: 439  438  437  436  435  434  433  432  431  430  429  428  Older
hosede: The maximum size of the neural network in each car's FSD computer is the same, rather like the RAM in your computer. When Tesla is putting new training data into their neural networks for FSD they are just changing the numbers; the overall data size does not change. This is not like going for a new version of software when you'd expect the software to grow in size with all the extra features they have added.
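The fixed-capacity point above can be sketched in a few lines: retraining replaces the numbers in a fixed-size weight table, it does not grow the table. This is a toy illustration, not Tesla's actual FSD architecture.

```python
# Toy sketch: "training" overwrites values in a fixed-capacity weight table.
import random

random.seed(0)
N = 1_000
weights = [random.random() for _ in range(N)]   # fixed-capacity "network"
size_before = len(weights)

# Feeding in "new training data" = changing the numbers in place.
for i in range(N):
    weights[i] = random.random()

print(len(weights) == size_before)  # capacity unchanged: True
```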

The effect described in the article by Will Lockett, and attributed to Sam Altman, is real. Improvement is not proportional to the number of training cycles: eventually you end up, say, doubling the training and getting only a 1% increase in the reliability of the output.

If you're trying to recognise whether a picture contains a cat or a dog with 100% accuracy, this is a huge problem. Does this mean that robotaxis will plateau and never be made to work? No. Remember that the cars are using video data; there is a time element and movement involved. If the cars are processing a video frame every 20 ms, they get multiple attempts at interpreting the scene from different angles.
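The multiple-attempts argument can be made concrete: if each frame gives an independent chance of a correct read (a simplifying assumption; consecutive frames are correlated in reality), per-frame accuracy compounds quickly. The figures below are illustrative, not Tesla numbers.

```python
# If a single frame is classified correctly with probability p, then across
# n independent frames the chance of at least one correct read is 1-(1-p)^n.

def chance_of_at_least_one_hit(p: float, n: int) -> float:
    """Probability that at least one of n independent frames is read correctly."""
    return 1 - (1 - p) ** n

# A per-frame accuracy of 90% over 25 frames (~0.5 s at one frame per 20 ms)
# compounds to a near-certain detection under the independence assumption.
print(chance_of_at_least_one_hit(0.90, 25))
```

The independence assumption overstates the benefit, but the direction of the effect is why video helps where single still images plateau.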

There was a superb piece on BYD in the FT last week.

“I don’t think people realise BYD’s greater ambition is to be an energy ecosystem company,” says Bridget McCarthy, the head of China operations at Snow Bull Capital, a Shenzhen-based hedge fund invested in BYD.

Selling passenger vehicles is just the first step, she adds. “They’re trying to say: ‘we’ll electrify your fleets of commercial vehicles, we’ll give you the energy storage, we’ll give you solar so you can generate electricity’.”


Musk had it all, then lost his mind on Twitter. Wang Chuanfu is getting on with creating a clean energy behemoth.

simon gordon
The number of brands that sold at least one electric vehicle in China last year, according to Stephen Dyer, a Shanghai-based auto consultant at AlixPartners. China has more than 100 domestic brands that churn out more vehicles than the country’s drivers buy each year. Yet the government encourages unprofitable carmakers to keep producing as officials try to boost economic growth, preserve jobs and expand China’s role in the global EV industry. This overcapacity adds cars to a global market that risks becoming more oversupplied. (from WSJ)

Look at the recent controversial football incidents over offside rulings and ball-over-the-line decisions.
To a human, 'in line' at offside meant just that: the bulk of one player's mass level with the bulk of the defender's.

But the computer cannot handle that; it can only detect whether a tiny finger or the slightest part of the attacking player is beyond all of the defender.

Same with the ball over the line. Where a human would judge whether the whole mass of the ball is over the line, the computer can only compare the projected extreme edge of the ball against the extreme edge of the line.

The fun will start with robotaxis. A straight road from point A to point B, OK, maybe, but anything else, no chance.

AI in robotaxis will not only need to achieve reliable image recognition; it will also have to predict movements.

We get used to this great gift in tennis, football, even when driving. It is very subtle, and hard to believe that AI will ever be able to do it. Anticipation.

You know the sort of thing: that dog looks as though it is about to run onto the road... but wait a minute, it is moving towards the road but we can now see that it is tethered to a post.
How about nasty kids pretending to move towards a road for fun, then stopping well short of the kerb?
Or a sign: residents only, closed to other traffic.

When does a bunch of rowdy thugs look menacing? Instinct. Could AI sense it?
Love the way the magnificent Alsatian police dogs at football matches had to be held back from attacking skinhead yobs. How did they know, and can it be programmed? They were not bothered about well-behaved supporters.

Beating a grandmaster at chess, well that is simple, just remembering every past move in every situation.
It would be very simple to create a long list of things that it will not be able to do.

cfb2 - Personally I think that fusion needs the free work provided by the compression of spacetime by the mass of a star. Trying to do it with other methods requires vast amounts of energy, such that it is difficult to make a surplus, especially given the maximum efficiency of heat cycles.
But doesn't more training mean bigger and bigger data banks? As Simon's article above said, doubling the database only improved effectiveness from 65% to 67.5%, or something similar.
I presume all this data has to be in each car, as it cannot always be in touch with "Tesla home base".

Fusion requires new maths to keep the plasma contained (AI has been shown to be a promising approach, though) and several breakthroughs so that the energy out exceeds the energy going in: Qtotal > 1 rather than Qplasma > 1 (Qplasma > 1 has been achieved). Problems are being solved and new approaches are being tried, but I'm not aware of any in the public domain that will result in Qtotal > 1.
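The Qtotal versus Qplasma distinction is just arithmetic: Qplasma only counts energy in the plasma, while Qtotal also pays for heating inefficiency and heat-to-electricity conversion. A back-of-envelope sketch, with illustrative (not measured) efficiency numbers:

```python
# Whole-plant gain: the plasma gain is multiplied by the efficiency of
# getting heating power into the plasma and of converting the output heat
# back to electricity. All figures below are illustrative assumptions.

def q_total(q_plasma: float, heating_efficiency: float, conversion_efficiency: float) -> float:
    """Ratio of electrical energy out to electrical energy in for the whole plant."""
    return q_plasma * heating_efficiency * conversion_efficiency

# Even with Qplasma > 1, plausible efficiencies leave Qtotal well below 1.
print(q_total(q_plasma=1.5, heating_efficiency=0.4, conversion_efficiency=0.4))
```

With those assumed efficiencies, a Qplasma of 1.5 yields a Qtotal of only about 0.24, which is why Qplasma > 1 headlines do not mean net electricity.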

Tesla's FSD requires more training data for their neural networks, which takes time. Tesla is no longer compute constrained but they do need to filter incoming video data sent from cars before using it for training.

Remember that Tesla have sight of new releases of FSD many months before they go to employees or wider release. For them to be preparing the changes needed for robotaxis, I'm expecting the driving-better-than-a-human problem to be solved. In other words, the plateauing alluded to in the Will Lockett article occurs at a level sufficiently safe for Tesla to get approval to run the robotaxi service.

I'm not expecting the additional stuff to control fleets of robotaxis to be done yet, or by 8/8/24. It's unlikely the FSD software will be approved by then either.

I think FSD like energy from fusion is one of those things that's always "going to happen" but never does - like "free beer tomorrow"
On the scale at the bottom it says:
As of July 2023

So, before Musk demonstrated FSD V12 with end to end neural networks in a live stream (September 2023). I wonder whether Gartner would still think "5 to 10 years" now?

Spoiler alert: they would, because they produce graphics to show that their paying customers either lead the market or have made a wise decision. Have a read of the following article, which explains why Gartner's research is worthless:

LinkedIn - December 2023:

Is the Large Language Model revolution just getting started or are we closer to the end?

by David Johnston

Principal Data Scientist at ThoughtWorks


I think there is a very good chance, as strange as it sounds, that LLM research has already made the major breakthrough, and that future progress will be incremental rather than revolutionary. AI however is a much bigger field and I expect continued research and progress on all sorts of things. The current paradigm, pattern recognition, will continue to be the leading methodology for perhaps another decade. After that, if there is much progress, I expect it to come from other areas of AI, some of which have fallen out of favor, or just briefly taken a backseat (e.g. reinforcement learning). Or perhaps we will finally figure out how to combine multiple paradigms into a larger system similar to how the human brain appears to work. But LLMs are probably approaching the point of being a solved problem and focus will shift away from making them more “intelligent” and focus more on making them more useful.

simon gordon

Status quo junkies?

What's wrong with asking questions?

The hype is high from some stock promoters.

Have there not been phases in the development of AI when it hit a roadblock and took more time to make the next leap? I think the writer is making this point.

simon gordon
The article suggests that we have reached a theoretical maximum beyond which progress is virtually impossible.


This makes the endlessly repeated mistake of saying ‘as things are today, progress can’t be made’.

The obvious flaw being ‘as things are today’.

Good luck status quo junkies, you’ll be wrong again (and again and again) soon enough..

simon gordon - 27 Apr '24 - 18:11 - 10697 of 10699

That article if correct suggests Robotaxis are dead in the water

They've all got it in for Tesla
Medium - 25/4/24:

AI Is Hitting A Hard Ceiling It Can’t Pass

Is it the end of AI’s rampant development?

by Will Lockett

There has been an insane amount of hype surrounding AI over the past few months. Supposedly, Teslas are going to entirely drive themselves in a year or two, AI will be smarter than humans next year, and an army of a billion AI-powered robots will replace human workers by 2040, and those are just the AI promises made by Elon Musk so far this year. The entire AI industry is awash with predictions and promises like this, and it feels like AI development is on an exponential trajectory we humans simply can't stop. However, that is far from the truth. You see, AI is starting to hit a development ceiling of diminishing returns, rendering these extravagant promises utterly hollow. Let me explain.

To understand this problem, we need to understand the basic principles of how AI works. Modern AIs use deep learning algorithms and artificial neural networks to find trends in data. They can then extrapolate from this data or generate new data along the same trend line. This starts by “training” the AI, where a massive amount of data is fed into it for analysis, enabling it to find these trends. After this, the AI can be queried for an output. This basic concept powers computer vision, self-driving cars, chatbots and generative AI. This is a somewhat reductive explanation, but it is all we need to understand for now.

Over the past few years, AIs have become significantly more capable. This has been partly due to better programming and algorithm development. But it is also 90% thanks to the fact that AIs have been trained on significantly larger datasets. This allows them to more accurately understand the trends in the data and, therefore, more accurately generate results. But there is a problem; we are seeing drastically diminishing returns in AI training, both in terms of data and computational power needed.

Let’s start with the data. Let’s say we built a simple computer vision AI designed to recognise dogs and cats, and we trained it using images and videos of 100 dogs and cats, and it can correctly identify them 60% of the time. If we doubled the number of training images and videos to 200, its recognition rate would improve, but only marginally to something like 65%. If we doubled the training images and videos again to 400, its improvement would be even more marginal, to something like 67.5%.
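The article's toy numbers (100 examples → 60%, 200 → 65%, 400 → 67.5%) follow a curve whose gains halve with each doubling of data. One curve matching those points exactly is accuracy(n) = 70 - 1000/n, which plateaus at 70% no matter how much data is added. This is an illustrative fit to the article's example, not a claim about real models.

```python
# Diminishing-returns curve fitted to the article's toy figures:
# accuracy(n) = 70 - 1000/n  ->  60% at n=100, 65% at n=200, 67.5% at n=400.
# The asymptote at 70% is the "ceiling": extra data buys ever-smaller gains.

def toy_accuracy(n_examples: int) -> float:
    return 70 - 1000 / n_examples

for n in (100, 200, 400, 800, 1_000_000):
    print(n, toy_accuracy(n))
```

Each doubling of the dataset closes only half the remaining gap to the ceiling, which is the diminishing-returns pattern the article describes.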

This is partly because when you have a smaller data set, each new training image gives you proportionally more new data to work with than adding a new training image to a larger dataset. However, it is also because AI can quickly make novel connections and trends in a small dataset, as it only has to find a trend that works with a few examples. But as that dataset grows, finding new and novel trends and connections that work for the entire dataset becomes harder and harder. These new trends and connections from larger datasets enable an AI to get better and more capable. As such, we are seeing the amount of training data required to improve an AI by a set amount increase dramatically as we reach a point of diminishing returns with AI training.

But there is another problem. AI training is incredibly computationally hungry. The AI has to compare each individual point of data to every other data point in the set to find these connections and trends. This means that for each bit of data you add to an AI training database, the amount of computational work it takes to train that AI on that database increases exponentially. As such, even if you can acquire the vast amount of data it takes to train these ever-improving AIs, the amount of physical computing power and energy it requires will eventually grow to the point of impossibility.
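Taking the article's all-pairs picture literally, the work grows with the number of pairs, n(n-1)/2, which is quadratic rather than exponential in the strict sense, but the practical effect the article describes (compute outpacing data) is the same. A toy count:

```python
# Under an all-pairs model of training cost, the number of comparisons is
# n*(n-1)/2. This grows quadratically: 10x the data -> ~100x the work.

def pairwise_comparisons(n: int) -> int:
    return n * (n - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(n, pairwise_comparisons(n))
```

Real training costs scale differently depending on architecture, but the super-linear growth is the point being made.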

Sadly, there is evidence that we are at a stage where both the diminishing returns of training dataset growth and the exponential increase in computing power required to use said datasets are enforcing a hard ceiling on AI development.

Take OpenAI’s flagship AI, ChatGPT4. Its improvement over ChatGPT3 was smaller than ChatGPT3’s improvement over ChatGPT2, and even though it was more accurate, it still had the same problems of hallucinating facts and lack of understanding as ChatGPT3 did. Now, OpenAI is very tight-lipped about how it develops its AIs, but experts have investigated and found that ChatGPT3 used a training dataset about 78 times larger than ChatGPT2’s, and ChatGPT4 uses a dataset 571 times larger than ChatGPT3’s! Yet despite this considerable uptick in training dataset size, ChatGPT4 still has significant flaws that limit its use cases. For example, it can’t be trusted to write anything remotely fact-based, as it still makes up facts.

Some estimates put ChatGPT4’s raw training dataset at 45 TB of plaintext. This means that for the next iteration to be as big of an improvement as ChatGPT4 was over ChatGPT3, the training dataset would need to be tens of thousands of TBs. Acquiring and preparing that amount of plaintext data, even with OpenAI’s dubious methods, is simply impractical. However, actually using this dataset to train their AI could use so much energy that the cost renders the AI entirely unviable, even for a non-profit.
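A quick sanity check of the arithmetic behind "tens of thousands of TBs", using the article's own figures (both of which are the article's estimates, not confirmed numbers):

```python
corpus_tb = 45      # article's estimate for ChatGPT4's plaintext corpus, in TB
multiplier = 571    # article's ChatGPT3 -> ChatGPT4 dataset multiplier

next_corpus_tb = corpus_tb * multiplier
print(next_corpus_tb)  # 25695 TB, i.e. "tens of thousands of TBs"
```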

That isn’t hyperbole. OpenAI CEO Sam Altman has gone on record saying that an energy breakthrough, like nuclear fusion, is needed to make advanced AI viable. Sadly, even if we do unlock nuclear fusion, it isn’t likely to be cheaper than our current energy this century or possibly even next century. In fact, no form of energy is set to become significantly cheaper than anything we currently have. So, this proposed solution to AI’s energy problem is deeply misleading.

This viewpoint is supported by some very serious studies. One from the University of Massachusetts Amherst looked at the computation and energy costs associated with improving an image recognition AI performance to over 95% accuracy. They found that training such a model would cost $100 billion and produce as much carbon emissions as New York City does in a month. Bearing in mind that this is for an AI that still gets it catastrophically wrong 5% of the time. The study also highlighted that increasing accuracy to 99% would take exponentially more cost and carbon emissions.

This is why Tesla will never develop full self-driving cars with its current approach. Their Autopilot and FSD can only sense the world around them through this type of AI computer vision, and for FSD to become fully self-driving, its image recognition needs to approach 100% accuracy. As this study shows, it could take far more money than even Tesla has to get their AI that good.

In other words, unless the AI industry can find a way to be more efficient with its AI training and its computational load, it won’t be able to break past this limit, and AI development will completely stagnate. Now, possible solutions are on the horizon, such as far more efficient AI hardware incorporating analogue and quantum technologies, and new AI architectures that require significantly smaller training datasets. However, these concepts are still in their infancy and are potentially decades away from being used in the real world.

In short, be prepared for AI to massively fall short of expectations over the next few years.

simon gordon
Good pick Johnwise, but you won't give my 4wd Model 3 a race will you!
Trying to rig the market with tax penalties on petrol or incentives on electric will never be enough to persuade the majority of people to purchase EVs. They are expensive, have no second-hand value, are inconvenient if you drive reasonable miles, and are not environmentally friendly, since they require rare metals to build, the batteries can't be recycled, and the majority of charging comes from carbon-produced electricity.

My mode of transport, four wheel drive diesel, It's Fantastic

I think the notion that Tesla is going to make $2.64 this year is highly optimistic. 45 cents in Q1, and things look to be getting worse. A significant annual loss looks a distinct possibility.

The AI/Robotics idea is what Musk is selling to the market. From the same article:

Tesla’s first-quarter earnings could have been an extreme disappointment to the bulls who just last week, let alone last year, were expecting a much better outcome. True to form, however, CEO Elon Musk used the conference call to talk about anything except their automotive business.

"I think Cathie Wood said it best. Like really, we should be thought of as an AI or robotics company. If you value Tesla as just like an auto company [then] fundamentally, it’s just the wrong framework and if you ask the wrong question, then the right answer is impossible."


"The way to think of Tesla is almost entirely in terms of solving autonomy and being able to turn on that autonomy for a gigantic fleet. And I think it might be the biggest asset value appreciation in history when that day happens, when you can do unsupervised full self-driving."

Who wants to miss out on the biggest asset value appreciation in history? Not the Tesla Illuminati, who responded jubilantly. At one point on Wednesday morning the stock was up 16 per cent, or more than $80bn in market cap. That’s more than a whole Diageo, or Richemont, or Glencore. It’s also more than an entire Volkswagen, or Stellantis, or Ferrari.

simon gordon
Four Candles

All that taxpayers' money spent on the Tories' New Green Windmill Deal, to make energy unaffordable...

Wind producing 1.5 percent

Live generation data from the Great Britain electricity grid

Musk is going independent with AI. Irrespective, the short thesis on Tesla is entirely based on revenue, margin, and prospective return multiples. That's it, the end.

Careful - I'm afraid you don't understand black-body radiation and the Stefan-Boltzmann law. If the outer reaches of the atmosphere are insulated from the earth, they cool, and if they cool they radiate less heat. The radiation the earth receives from the sun remains the same, so the earth's lower atmosphere heats up. The sun's rays are at a much higher frequency than the re-emitted radiation; before you ask, again, physics.

