Elon Musk's Shocking New AI Prediction
Elon Musk recently had a Twitter Spaces conversation with investor Cathie Wood where they discussed future developments in AI, including open-source AI and the potential impact of GPT-4. Musk's predictions hinted at potential shifts in market-leader status within six months, emphasizing the importance of staying informed in the AI space. He touched on the origins of OpenAI and the importance of open-source AI for maximum accuracy in AI development.
In the conversation, Musk highlighted Tesla’s advanced AI capabilities, especially in self-driving technology, positioning the company as a leader in real-world AI applications. The discussion also delved into the role of Transformers and diffusion in AI advancements and the convergence towards intelligence in AI systems. The focus on the intersection of AI with fundamental physics and the potential for AI to invent new technologies highlighted the limitless possibilities in the field.
Overall, Musk’s insights on AI trends and the future of technology, combined with Wood’s questions, provided a thought-provoking exploration of the evolving landscape of artificial intelligence.
Watch the video by TheAIGRID
Video Transcript
Elon Musk just had a recent Twitter Space where he discussed with investor Cathie Wood the future developments coming in 2024: things like Grok AI, open-source AI, and a lot of other stuff you need to know if you are in the AI space. I do think it is a really interesting talk, because he touches on some of the things people have been discussing on Twitter regarding open-source AI. He also talks about the timelines for open-source AI and how, in around six months, things could change because GPT-4 might not be the market leader we once knew it to be. So with that being said, hopefully you enjoy this Twitter Space audio. I'll leave a link to the full clip where he does actually talk about Tesla, but this is a supercut version focused specifically on the bits where he's talking about AI, which is relevant for all of you.
So with that being said, here it is, starting with Grok's summary. This is Grok talking: it seems that people are eager to hear about the latest developments in AI, in Tesla's AI technologies (we definitely are), such as FSD 12 and the next-generation platform for robotaxi. They're also interested in learning more about the company's plans for the Optimus project. Additionally, there is a lot of curiosity surrounding Elon Musk's thoughts on the AI space, as well as his take on the future of Bitcoin. I guess that was a little repetitive, 'AI' there, but these topics are sure to spark some engaging discussions. That's Grok for you. Okay, well, Grok, that's pretty good. Yeah, pretty good. So, AI. And I do have more time, you know, if you would like to go longer than 15 minutes.
We can go longer than 15 minutes. I mean, I'll try to distill my thoughts here on AI, because I could really talk four hours on AI. You know, we're both steeped in it; Charlie and Brett are steeped in it, so is Frank, and of course Tasha, who's done our work on autonomous. But one of the things we've been very curious about is open-source versus closed AI. I don't know if you've seen the chart that Frank and Joseph did on our team, but they drew a line: they basically calculated performance improvements in both the closed, or private, AI companies and models, the large language models, and then the open ones. Open is behind closed, but its performance is improving at a steeper rate than closed. Do you have any thoughts about that, and about open versus closed generally when it comes to AI? Sure. Well, I think a lot of people know that I was instrumental in the creation of OpenAI,
and in fact I came up with the name; the 'open' in OpenAI means open source. It started as an open-source nonprofit, and the reason I wanted to create it was to establish a counterweight to Google DeepMind, because it was a unipolar world at that point. Google DeepMind had probably two-thirds or more of the top AI talent, and of course vast computing and financial resources. At the time I was close friends with Larry Page, and it did not seem to me that he was taking AI safety seriously, so I felt there needed to be, like I said, some counterweight, some competitor to Google DeepMind. Frankly, instrumental in creating the company was recruiting Ilya Sutskever, which was a back-and-forth thing where Ilya actually changed his mind a few times and ultimately decided to come to OpenAI. So I won that recruiting battle, and that is actually what caused Larry Page to stop being friends with me, frankly. But now, ironically (because fate loves irony), OpenAI is super closed-source and for maximum profit. It should be renamed 'Closed for Maximum Profit AI'; that would be more accurate, because it's literally the polar opposite of how it was started.
I guess I generally have a bias in favor of open source, and you can see that certainly with the X platform, where we've open-sourced the algorithm. For example, for Community Notes we open-sourced not just the algorithm but all the data as well, so you can see exactly how a note was created; it can be audited by third parties. There's no mystery, nothing hidden at all. I think that seems to be one of the confusing things about open source as it pertains to large language models: it's not as simple, and I think this has been lost in some of the debate, as just open-sourcing the code. There are shades of gray here, of course, with parameter weights and data; there's all of that spectrum, and I think that's maybe been lost from some of the debate. So it's great to hear.
There’s there’s actually a very little code um very little actual traditional lines of software certainly at the at the inference level there there’s very little it’s a remarkably tiny number of lines of code um so it’s to these giant you know weights files and you know what your
Hyperparameter you know numbers are and um it’s basically a giant comma separated value file um so I always find it amusing to contemplate that perhaps our digital God will be a CSV file you know like and uh and then you make a new CSV file that has maybe more weights and the
Weights are better um and then you’re pretty much Del the old one so just the AI is evolving and deleting its prior self constantly um so so we’re going to worship a giant Excel file essentially that’s uh kind of I mean you need a very
Big yeah it’s a lot of cells but um it’s really just a bunch of numbers and and then those numbers are they’re getting smaller in magnitude too you know going from fp16 to FP fp32 to fp16 to in6 to in 8 and and now it looks like things
Are trending towards uh mostly in full um so they’re not particularly large numbers um yeah and back so yeah I’m not sure if I’m answering I’m probably answering the question uh but I mean I I I think it’s true that clo source is not that far in
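For context on the FP32/FP16/INT8/INT4 progression Musk mentions: these are the numeric formats used to store model weights, and each step roughly halves the memory per weight. Below is a minimal, illustrative sketch of symmetric INT8 weight quantization in NumPy; it is not any particular lab's pipeline, just the general idea of trading precision for size.

```python
# Minimal sketch of symmetric per-tensor INT8 quantization (illustrative only).
import numpy as np

def quantize_int8(weights_fp32: np.ndarray):
    """Map float32 weights onto the int8 range [-127, 127] using one scale factor."""
    scale = np.abs(weights_fp32).max() / 127.0
    q = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for use at inference time."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # stand-in for one weight matrix
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("max quantization error:", np.abs(w - w_hat).max())
print("bytes: fp32 =", w.nbytes, " int8 =", q.nbytes)   # 4x smaller
```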
I'm not sure if I'm answering the question, but I think it's true that closed source is not that far ahead of open source from a time standpoint. But let's say, if you assume open source is six months behind: because of the immense rate of improvement in AI, that six months is actually a massive amount of time. In an exponentially improving situation, a six-month delta will, I think, feel like an eternity. So I think closed will outperform open by a meaningful amount at any given point in time.
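A purely illustrative bit of arithmetic for why a fixed six-month lag feels so large in an exponentially improving field: under an assumed capability doubling time, the lag becomes a multiplicative gap of 2 to the power of (lag / doubling time). The doubling times below are hypothetical.

```python
# Illustrative only: how a 6-month lag translates into a capability gap
# under different assumed doubling times.
lag_months = 6
for doubling_time_months in (3, 6, 12):
    gap = 2 ** (lag_months / doubling_time_months)
    print(f"doubling every {doubling_time_months:>2} months -> "
          f"a {lag_months}-month lag is a ~{gap:.1f}x gap")
```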
I was just going to say: given the questions we were talking about before, about truth and not truth, do you think in ten years' time AI will have been a net truth improver or detractor from where we are today? I think it will be, well, speaking at least for Grok, I think it will be a significant truth improver. That is literally our goal: to be maximally truth-seeking, maximally curious, minimizing the error between perceived reality, described reality, and actual reality, and always acknowledging the error, not being too confident about it. And I think there will be a competition for truth, and people will tend towards the one that they think is most accurate. If there's at least one AI that is aiming for maximum accuracy, I think it pushes all of the AIs to aim for maximum accuracy, just as with the X platform, formerly known as Twitter: as soon as X pushes for maximum truth and accuracy, it sort of forces the hand of others; the others now also have to do that.
Right, the truth arms race among the CSVs. And one of our theses, I think, with AI is that Tesla is one of the most undervalued AI companies there is, because of the insane amount of data you've got to feed into autonomy. I don't know if there's any commentary on that, because obviously, with the rise of these LLM companies, which are by and large working off essentially available data that is not proprietary, the benefits accrue to the companies like Tesla that understand how to use AI but also have these enormous proprietary data pools; it seems like that's going to be the next frontier. Yeah, I think that's an accurate description. Tesla is one of the leading AI companies in the world, and with respect to real-world AI it is obviously by far and away the leader.
So, you know, the phrase 'large language model', LLM, is massively overused. But what I do see happening is somewhat of a convergence towards intelligence. For Tesla, to really make full self-driving work, you kind of need baby AGI in the car, because you need to understand reality, and reality is messy and complicated. And just as a side effect, the car's AI has to be able to read, for example; it has to be able to read arbitrary sentences, in every language, as a little side effect of understanding reality. So I think everything is coming down to different layers of Transformers and diffusion, and how you put together the Transformers and the diffusion. I sort of made that somewhat niche AI joke on the X platform: who do you think will be president in 2032, Transformers or diffusion?
Do you think that switching to full-stack AI for Tesla's full self-driving reduces the kind of forecast error for you in terms of when the problem will be solved? As in, you have famously said it's a year away for a few years now. Do you think that, wherever it ends up, you'll have a better line of sight on when you're going to cross a particular threshold of performance where it can go without human intervention? Yeah, I mean, the car is already incredibly good at driving itself, especially if you say, okay, drive in California, which is generally easy driving, because you don't have heavy rain and snow and that kind of thing in most parts of California, and Tesla's engineering is primarily in California, so it's going to overfit for solving California. But if you're just driving, say, around Palo Alto, the probability that you'll need an intervention at this point is incredibly low. In fact, even if you're driving through San Francisco, the probability of intervention at this point is very low. So we're really just going through a march of nines: how many nines of reliability do you need before somebody does not need to monitor the system?
You know, it's so interesting, GM basically shutting down Cruise; it wouldn't have been possible, right? But we didn't know that. Is that how you're viewing it now? Well, it's certainly true that Transformers are transformational. Pretty much everyone's using Transformers, and it's really just how many Transformers and what kind of Transformers you're using: autoregressive Transformers, which are very memory-bandwidth intensive. I'm not sure you can really do AI in a meaningful way without Transformers.
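A rough way to see why autoregressive Transformers are memory-bandwidth intensive: generating each token requires streaming essentially all of the weights through the chip, so single-stream decoding speed is bounded by roughly memory bandwidth divided by model size in bytes. The model size and bandwidth below are hypothetical, chosen only to show the shape of the calculation and how lower-precision weights raise the ceiling.

```python
# Illustrative upper bound on decoding speed for an autoregressive Transformer.
params = 70e9             # hypothetical 70B-parameter model
bytes_per_weight = 2      # FP16; INT8 or INT4 would halve or quarter this
bandwidth = 2.0e12        # hypothetical 2 TB/s of memory bandwidth

model_bytes = params * bytes_per_weight
print(f"~{bandwidth / model_bytes:.0f} tokens/s upper bound at FP16")
print(f"~{bandwidth / (params * 0.5):.0f} tokens/s upper bound at INT4")
```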
Diffusion is also very important, so it really looks like AGI is some combination of Transformers and diffusion. Interesting. Does diffusion help, Elon, with the things that Transformers are not necessarily that good at yet, like planning and remembering, rather than sort of the current state? I think Transformers are important for all aspects of AI. And computers are very good at remembering things; your phone remembers the video that you took down to the last pixel, whereas most humans cannot remember who they met last week. So memory is the easy part; we've already outsourced memory to computers, in that, like you say, how much of human knowledge is in digital form versus contained in human neurons? It's overwhelmingly in silicon rather than biological neurons. Right.
Mean I think it’s always you know it’s good to to think of um like try to consider one of the most fundamental ratios um this is like in physics you’re always think about try to looking at fundamental ratios um and some of the you know one of the fundamental ratios
Is the ratio of digital to biological compute um just you know just just in raw raw compute um and and how much and the ratio of digital memory to biological memory so you know at what point it let’s like I think at this point I think we’re probably over 99% of
All memory is digital as opposed to biological so you’ve got over I think over a 100 digital to biological ratio on memory at this point um trending towards thousand you know in many orders of magnitude beyond that um then um then you’ve got the digital to biological compute ratio and and
Since the number of humans is more or less constant I’m worried about it you know human is declining quite significantly in the years to come because the birth rate is so low um but you know so so human human computer is more or less it looks like a flatline
Versus time whereas the digital compute is uh exponential right um and um you know at some point it’s that that will also be above 100 meaning more than 99% of all comp will be digital instead of biological right and if we’re not there already we will be within a year or
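The step from "over a 100-to-1 ratio" to "over 99% digital" is just the arithmetic of shares: if digital capacity is r times biological capacity, the digital fraction of the total is r / (r + 1).

```python
# Ratio of digital to biological capacity vs. digital share of the total.
for r in (10, 100, 1000):
    share = r / (r + 1)
    print(f"digital:biological = {r}:1 -> {share:.1%} of the total is digital")
```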
Although it is amazing how much data biology can store, right? I believe it's estimated that a gram of dried DNA can store between 400 and 500 exabytes, so maybe there are still things to learn from biology. But yes, the pace of change is incredible.
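The figure quoted here is consistent with the oft-cited theoretical maximum of roughly 455 exabytes per gram of single-stranded DNA, which follows from two bits per base and an average nucleotide mass of about 330 daltons. A quick back-of-envelope check:

```python
# Back-of-envelope check of DNA's theoretical storage density.
AVOGADRO = 6.022e23           # molecules per mole
base_mass_g_per_mol = 330.0   # approximate mass of one single-stranded nucleotide
bits_per_base = 2             # four bases -> 2 bits each

bases_per_gram = AVOGADRO / base_mass_g_per_mol
bytes_per_gram = bases_per_gram * bits_per_base / 8
print(f"~{bytes_per_gram / 1e18:.0f} exabytes per gram (theoretical maximum)")
```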
Mean if you encode memories uh in DNA like a Tri tape then the DNA um memory potential is immense um but I’m not sure that’s actually what’s happening or if it is it’s I don’t know I’m not sure why my memory is so terrible but actually I mean your memory
In terms of the raw details like you have an amazing search function right I it’s kind of like yeah well for for any I mean I would argue kind of in some ways like the Transformer architecture enables um effective search of a a very very broad amount of data um and or
That’s a or or it’s like data compression it seems like the um yes we have there’s many more like databytes encoded or gigabytes encoded in disk but but actually being able to effectively search that space is is pretty still primitive with compute at least At Large Scale relative to human’s ability to
Access information no I mean there still some things that humans do better than computers um I mean if if you look say like say like computers have not yet discovered any fundamental physics but humans have discovered a lot of fundamental physics um the computers have not yet invented a
Computers have not yet invented a useful technology; humans have invented many useful technologies. Do you subscribe to the notion that essentially training AI systems on language is not enough, and that you have to have some kind of embodied source of data, like through an Optimus robot, to actually get the kind of raw learning an AI system would need to understand fundamental physics? Yeah, that's what I think is coming. I think what's coming is AI understanding fundamental physics and AI inventing new technologies. It definitely feels like, even though some things now apparently pass the Turing test, more important is passing the Moneypenny test: can an AI actually pay bills, do expense reports to the extent James Bond did those, and actually pay parking fines and things like that? That would actually be a very good use of AI that I think would have a lot of users immediately. And maybe the Billy Connolly test: can an AI actually be consistently funny? Those feel like they're further out. Grok is pretty funny. That's fair.
I think one of our goals for Grok is to be the funniest AI. I mean, if you ask Grok to provide a vulgar roast, it's really good. And speaking of Grok, not to let down our future AI overlords, the final question on the Grok summary that Cathie touched on was the future of Bitcoin. Not sure if we want to touch on that or not, or whether that's out of...
Video “Elon Musk's Shocking New AI Prediction” was uploaded on 12/23/2023 to the YouTube channel TheAIGRID