Sam Altman recently revealed new details about GPT-5 in a fiery interview with Axios in Davos. He expressed surprise at the overwhelming popularity of ChatGPT and GPT-4, and at how they have transformed businesses and integrated into daily life. Altman also hinted at a potential name change for GPT-5, and expressed excitement about the exponential increase in the model’s intelligence and its ability to solve complex problems more accurately.
Regarding AI’s future in 2024, Altman stressed that the emphasis will be on continually increasing generalized intelligence. He hinted that the next model may be called something other than GPT-5 and underscored the importance of AI getting smarter across the board.
Altman highlighted that advancements in AI will result from both more powerful models and increased developer involvement. He deemed the productization of AI critical, emphasizing that OpenAI has evolved from a research company into one prioritizing research and product development simultaneously.
Furthermore, when asked about limitations in AI, Altman pointed to improvements in retrieval of real-time information and better use of specific data. He hinted at a shift in the way people use computers, alluding to a potential transformation in programming in the coming years. Altman’s insights reveal an exciting future for AI, poised for transformative growth and increased integration into daily life.
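The "retrieval of real-time information" Altman points to is essentially retrieval-augmented generation (RAG): fetch relevant documents at inference time and prepend them to the prompt so the model can answer from fresh, trusted sources rather than from knowledge frozen at training time. Here is a minimal sketch of the idea; the keyword-overlap retriever and all function names are illustrative toys (real systems use embedding search and an actual model API, neither of which is shown here):

```python
# Toy retrieval-augmented generation (RAG) pipeline.
# The "retriever" scores documents by keyword overlap with the query;
# production systems use vector embeddings instead.

def score(query: str, doc: str) -> int:
    """Count how many distinct query words appear in the document."""
    doc_words = set(doc.lower().split())
    return sum(1 for w in set(query.lower().split()) if w in doc_words)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model by stuffing retrieved context into the prompt."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Davos 2024: leaders discussed AI governance at the forum.",
    "Recipe: how to bake sourdough bread at home.",
    "OpenAI signed licensing deals to surface publisher content.",
]
print(build_prompt("what happened today at Davos", corpus))
```

The final prompt would then be sent to a language model, which answers from the retrieved context instead of its training data.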
Watch the video by Matthew Berman
Video Transcript
Sam Altman was just interviewed by Axios in Davos and he gave some major updates about GPT-5 and answered some tough questions about a range of different topics we’re going to watch the video and I’m going to break it down take a look when we decided to launch
ChatGPT we thought it was going to do well but we had no idea how well it was going to do and we thought the models just we knew they were going to be great eventually but we didn’t think they were good enough to resonate like
ChatGPT and GPT-4 have um and so I think one piece of knowledge is that this technology even with all of its current limitations is far more useful than we thought and can integrate into our lives in a much more valuable way than we
Thought and so now that we know that as we think about launching the next much better models um we come with a different perspective I learned something about uh important and urgent problems and not letting the important but not urgent ones I learned something about board members I learned a lot of things okay so even Sam Altman was surprised by how incredibly popular ChatGPT became and how quickly it became that popular and not only that how useful it is in people’s everyday lives I’ve seen entire businesses be transformed by ChatGPT
And AI in general already I use AI in a bunch of different areas in my life already including obviously for work I’m extremely bullish and this is probably the least interesting question from this entire interview let’s let’s go to the next clip what can we expect AI to do
This year that it couldn’t do last year there are all these things that we can happy and I’d love to talk about sort of all the specifics but the general principle I think the thing that matters most is just that it gets smarter so GPT-2 couldn’t do very much GPT-3 could do more GPT-4 could do a lot more GPT-5 will be able to do a lot lot more and the thing that or whatever we call it and the thing that matters most is not that it can you know have this new modality or it can solve this
New problem it is the generalized intelligence keeps increasing and we find new ways to put that into a product we find new ways to use it but that’s that is the high order bit I think that dominates everything else in the importance is that the overall capability of the model its overall
Intelligence its ability to do longer more complex problems more accurately more of them that is increasing across the board and that to me is one of the few things that make this totally different from any previous kind of technology okay so in this clip he was
Asked what we can expect from AI in 2024 and he didn’t really say too much here other than it’s going to get much much smarter across the board but I think we all understood and knew that I think one little tidbit that slipped out is he
Might not call it GPT-5 he said or whatever we call it so that’s very minor but I just wanted to point that out let’s watch the next clip and let’s talk kind of this year what we can expect do you think most of the gains will come
From new more powerful models or the fact that we now have so many more people developing on the models for this year that’s a great question um I mean obviously it’ll be the multiplicative factor of both but if history is a guide for us it is the more advanced model
That is the most important step forward so I think I would expect that to be the biggest gain again but as it does get integrated in people’s workflows in all these new and different ways um the productization is critical like we started off as a research
Company now we understand that we have to treat both research and product as critical and it’s the fact that we can do both of those things that I think makes us a special company I thought this was a fantastic question and Sam Altman said not only are the
Gains going to come from improvements and models but they’re also going to come from this huge influx of new developers building on top of AI and all of these new AI tools that allow developers to build incredible products and I could not agree more with that now
He did ultimately say that he believes gains in model performance are going to be the biggest gains overall but I’m not so sure we’re not going to have AGI this year at least that’s what I think and so a lot of the gains are going to be based
On these incredible projects that are being built on top of artificial intelligence and I cover a lot of those on my channel so if you’re interested in those definitely subscribe if you’re not already and the coolest part is getting this new technology into all the developers hands to allow them to build
These incredible projects but of course he has Insider information so he might know something that is coming GPT 5 in 2024 that might be this tremendous gain in AI that nobody was expecting and he’ll touch a little bit more on that later in this video what are some of the
Limitations that you think this will be the year we overcome these not just you Sam Altman and OpenAI but kind of the industry what are some of the things that are on the cusp of being solved whether it’s hallucinations because of better grounding whether it’s merging AI
With company data or specific data sets so that’s a good one I think access to specific data and the ability to use specific data in a much better way for like more relevant more like context to where it work I think that’ll get much better this year I think there’s all sorts of
The current stuff that people complain about like the voice is too slow and you know it’s not real time and that’ll get better this year um I expect okay so this clip continues but I’m going to wait a second and I want to talk about
What he said already now he is talking about being able to have large language models be much better at retrieving information at inference time and that is super important something that ChatGPT only recently has gotten pretty good at with Elon Musk’s X and Grok that is what
It was built to do get real-time information from the incredible amount of data that X generates every single day and they can get it in real time and that’s incredibly important real time information cannot be built into the base model because at training time that’s essentially when that knowledge
Is cut off but with retrieval augmented generation and just regular web scraping the ability to give large language models real-time information is incredibly powerful and we’re pretty much there but he goes on to talk about something I truly believe in now you already know I’ve said that programming
Is going to be extremely different in the coming years and in 10 years we probably won’t need programmers at all now he starts to hint at why that might be he doesn’t say it directly but he talks about a shift in the way that people use computers that probably would
Lead to that outcome let’s watch I think where we’re headed and then I’ll talk about this year is we’re headed towards um the way you use a computer is to talk to it the operating system of a computer in some sense is close to this idea that you’re like working
Inside of a chat experience or an AI experience and you know you get to your computer and rather than go open a browser and type in Gmail and look through your emails or whatever you might just say like what were my most important emails today can you respond
To all of those I’ll see these you know go find this thing there and send it there so it’s it is we’re heading towards this new way to kind of do knowledge work um like this is I think with every great technological Revolution we do get an opportunity to
Use a computer in a new way and we won’t get all the way there this year but I do think we’ll see people do more more and more of their workflow inside of a language model for lack of a better I was going to say okay so you remember
The rabbit device that was just announced a week or so ago I pre-ordered one I can’t wait to get it and of course I’m going to make videos about it but that is truly what I believe is the future of how humans will operate computers it is as simple as natural
Language direct to compute and that’s what he’s talking about he is talking about no more graphical user interfaces where you have to click and type everything you just talk to AI and it tells you exactly what you’re looking for and that is so fascinating and in that environment that’s when I start to
Think where do programmers live if I just talk to a large language model who can write and execute code to achieve the task that I just gave it why do we need programmers and this brings me no joy to say because I love programming but it’s hard for me to see a future
Without that and I made a whole video about it I’ll link it in the description below and if you disagree please tell me in the comments and of course if you disagree let me know as well do you think people will be spending hours a day within some sort of AI
Assistant well some classes of jobs already do like programmers many programmers already do that and I think that’s a reasonable uh advance look at what may happen to more people so yes I don’t know exactly what the format it’ll be I don’t know if it’ll like I don’t
Know quite what this new computer of the future looks like or this new like AI operating system looks like but by the end of the year I would certainly say it’s a safe bet that more people are doing their work in one of these experiences and it’s not just this
Thing you launch a few times a day when you have a question or need your buzzword speech written yeah I could not agree more as these models get better as more developers build incredible projects on top of them they’re going to get so capable that our operating
Systems are going to start to transition to this natural language interface as the default it’s going to be very early this year but I do believe in the next few years the way we use computers is going to change fundamentally and what won’t we see this year what are some of
The things that AI will do down the road but you know beyond sort of the robots taking over science fiction stuff what are some of the things that absolutely will come that we’ll want but are more than a year away the single thing that I think is
Most important to me about what AI will do for us is help vastly accelerate the rate of scientific discovery um make new scientific discoveries uh increasingly autonomously I don’t think that’s a this year thing but I think when that happens it’s a huge
Deal all right so in that previous clip Sam Altman talks about the rate of scientific discovery now there’s been a number of papers and articles lately talking about large language models’ ability to discover new research mathematics science and a lot of them say at least today it can’t be done
However as these models continue to get better and with the addition of synthetic data allowing orders of magnitude more data to go into the models to be trained a lot of people do believe that large language models are going to make scientific discoveries and Sam Altman is obviously one of those
People now how incredible would it be if large language models running 24 hours a day were tasked with finding a cure for cancer or other diseases other ailments other genetic defects the consequences could be absolutely incredible and this is just one of the reasons why I’m so optimistic about our future especially
As it relates to artificial intelligence but now Ina Fried from Axios is going to start asking some hard questions let’s watch what you’ve done to date of training on the open web is fine why do you even need to sign these kinds of deals to license content what we’re
Interested in is not data for training which we could talk about later uh there’s you know there’s some value there but what we really want is when people are using ChatGPT and ask what happened today at Davos we’d like to be able to display trusted branded high-quality
Content at inference time and that’s been the focus of these deals not training all right I’m going to pause for a second this part of the conversation continues but this is really important he is setting up the argument that the data that he is getting from the New York Times and
Other sources but really the New York Times because they’re the ones that sued him is not really that important for what they’re doing going forward the real importance of having these up-to-date articles and other things that may be copyrighted is the fact that you can get real-time information from
Reputable sources and now Ina goes on to press Sam further about this topic let’s look okay but when it comes to training I mean do you still believe that it’s reasonable to train on anything that’s publicly available um I wish I had an easy yes or no answer for you on that I think it depends there’s many cases where we think it is there’s many where we don’t think it’s the right thing to do legal or not like for example we respect opt-outs in the spirit of being a good neighbor even when it’s clearly totally
Legal for us to do it and you know I think that’s like a good thing to do now one of the tricks to all this by the way is let’s say you the New York Times or somebody else um we can respect an opt out to not go like look at you know
Train on your content but New York Times content has been copied and inappropriately or not attributed all over the web and so when we’re reading the web there will be New York Times articles not identified as such on random sites that say please train on my data so to be able to ensure that we’re sort of a good neighbor one of the things that we think is important to happen is no matter what we train on and again we’ll try to
Respect it as much as we can but the internet is a weird place no matter what we train on we don’t want to regurgitate someone’s copyrighted content that feels clearly not good to us okay I want to pause it there for a second so obviously
This is all about the New York Times and the New York Times suing open AI now what he’s saying is they allow opt out from specific websites and that’s fine but he’s also saying that all over the internet public data people have already copied and not attributed New York Times
Articles everywhere and it is really difficult to filter for those unattributed pieces of content because they have proliferated all over the Internet already and this is all stuff that OpenAI has already published as their counterargument to the New York Times lawsuit and of course they don’t
Want to regurgitate an article word for word because that is copyright infringement and also it doesn’t really add that much value so I do believe that Sam Altman is trying to do the right thing although his assessment on whether he should be able to train on copyrighted material legally I think is wrong so we’re working to drive that as close to zero as we can and we have made huge amounts of progress with new technology there and that has to do with what gets surfaced in terms of how you train models and the courts are going to
Settle a bunch of these I mean we’ve had you guys are facing lawsuits others are are you prepared to if you have to build a model that’s trained only on stuff that’s clearly public domain or licensed yeah again I think everyone is convinced like my data is so great we cannot
Imagine you can make an AI without the New York Times training data now we’re happy to include New York Times training data if the New York Times would like and there are many other partners who want us to but we don’t need to and in fact as these models get
Smarter and better at reasoning um we need less training data if you just think about your own experience like probably no one in this room has read 2,000 biology textbooks the 2,001st if you had would not it’s not going to help you that much all right that is a
Spicy answer he basically said if you’re a content owner and you think we need you we don’t you need us what an answer and he says as models continue to get better at reasoning and logic they won’t need as much data to train on now I’m not so sure about that he definitely
Knows better than I do but I think at the same time synthetic data is going to be a huge boon for OpenAI and artificial intelligence in general and of course that isn’t copyrighted at all let’s keep watching what you want is a small amount of super high
Quality data and to think really hard so huge amounts of data are going to continue to be important to us there’s lots of good ways we can get that from clearly like open domain data and then increasingly I think the models are going to think harder about a smaller
Amount of known high quality data and if people don’t want us to train on their data no problem okay he definitely contradicted himself there he on the one hand at the beginning of that clip said yeah we’re going to continue to get more and more data and then a second later he
Said well our models are going to need less and less data so that’s definitely a contradiction and I’m going to guess it’s actually the former they’re going to need more and more data but the data that they get won’t be copyrighted data because it’ll be synthetic data created
By the models themselves do you have a model in the labs that’s trained only on stuff that you know you have a right to um I appreciate for sure the effort to get me to talk about a model in the labs but we never do that okay love that
Answer Sam Altman is media-trained to say the least Ina tried to get Sam Altman to talk about a model that they are cooking up in their lab and of course Sam Altman gave the hard no not a chance you’re getting me to talk about that we’ll move on to another easy topic
Democracy um OpenAI announced a variety of efforts just in this past week about securing democracy working with other groups to help secure democracy your own efforts with so many elections around the world this year including of course the US presidential election how confident are you that OpenAI technology won’t be used by either campaigns or outside actors to influence the election all right before I show you Sam Altman’s reply to this the article that Ina is referencing is a blog post that OpenAI just put out and it’s about how they’re handling election security let
Me show you that real quick so this is the blog post it was put out just a couple days ago Democratic inputs to AI grant program lessons learned and implementation plans it says we funded 10 teams from around the world to design ideas and tools to collectively
Govern AI immediately I think wait isn’t that OpenAI’s job why are they outsourcing that that seems like an incredibly important thing to manage internally yet they’re outsourcing it to other teams that’s weird to me now they do give a lot of information in this blog post about how they’re going to be
Handling election security and Democratic Security in general if you want to see me make a full video about this blog post let me know in the comments below it’s one of our top worries for the year so we put this stuff out we want to get way ahead of it
We know this is going to be important I think in any election people are tempted to fight the last war and put their effort into what happened last time and you know try to mitigate that my expectation although it may be wrong is that this time around there’s
Going to be new challenges and we’ve got to really be prepared and addressing those and we don’t yet know what all of them are we have some ideas as you mentioned we published a lot of stuff yesterday but we really just want to have a very tight feedback loop careful
Monitoring be willing to like make changes quickly if we notice anything and work with the broad ecosystem of partners uh to do the best that we can but I’m nervous about this and I think it’s good that we’re nervous about this so for a long time Sam Altman has said
That election security and using artificial intelligence to influence politics to create deepfakes these are all things that he’s been worried about he did an interview with Lex Fridman where he said the same thing that was over a year ago and I’m sure he’s been thinking about this problem for a long
Time he hasn’t really shared much other than we’re going to try really hard uh which you know that’s I guess enough for now but I’m definitely going to dive deeper into that article that I just showed you to learn more about what they’re thinking Ina goes on to press
Sam let’s see what Sam says still I mean there’s only a handful of people whose job at OpenAI is really dedicated to election stuff Meta TikTok they have hundreds of people they probably have more people than you have in your whole company I don’t know the number but we
Have a lot like we are taking this quite seriously but you only have 500 employees you can’t 800 I don’t know 700 we have I don’t know the exact number but more than 500 but but you can’t have hundreds of people just on elections right look there are a lot of companies
That have teams of hundreds of people that do less than teams of like four people do at a competently run company all right I love that answer and the number of people working on a problem is not the right metric by any means especially when you’re talking about artificial intelligence because AI gives
People superhuman power superhuman productivity and the best researchers working their butts off on a really hard problem a few of them could probably do more than a larger company and I think a perfect example of that is looking at open AI look how much they’ve been able
To do with just a few hundred employees whereas Google has tens of thousands of employees and their AI models are nowhere near GPT-4 level so I think that was a very accurate answer by Sam Altman and he is the right person to give that answer because not only is
He doing incredible things at OpenAI with very few people he also invests in startups throughout the world with a handful of people that go on to disrupt entire industries so I absolutely love that answer you also announced a change in policy that allows
OpenAI models to be used by militaries uh why did you make that change because what we tried to do is say here’s what you can’t use the models for rather than here’s this group of people that can’t use it um there were parts of the Department of Defense that had I think
Super legitimate super good use cases for our models and a blanket saying you know anyone whose address ends in .mil can’t use this I think is bad uh I’m like a very proud citizen of the US I’m a huge supporter of liberal democracy continuing to do well we don’t
Want our models being used to like make kill decisions of course but there’s a lot of other stuff that the military does that’s quite important okay so I like that answer a lot as well he’s basically saying the military should be able to use AI there’s going
To be lines in the sand of what they can and can’t use AI for but just saying a military cannot use AI because they are the military is the wrong way to think about it and he goes on to double down with support for the US government the
US military which I also appreciate being from the US you’ve got some great use cases that you talked about helping you know prevent suicide by veterans obviously if someone giving a speech at West Point wants to translate it into Swedish that’s probably not a high-risk
Use then you have stuff at the other end you know developing a nuclear weapon clearly against your policies but there’s so much in the middle there’s so much that is not building a bomb it’s not destroying property which is another one you have that’s allowed that’s not
Allowed rather but that could be really harmful I mean these AI engines are incredibly you know capable persuasion engines how do you draw the line of what a military can and can’t do and do you think that’s the right place one of the things that we believe very deeply is
That society and this technology have to co-evolve um we believe in iterative deployment a lot for the obvious reason which is that people need time to gradually update and think and figure out what the rules should be but there’s another part which is that you can’t separate the
Technology from the world you can’t just even if you get everything magically right you can’t build it in secret and then put it in the world at once so I agree that getting things out early and often is generally good it gives people enough time to adapt to test these
Things and to make sure that there are no glaring holes or weaknesses or ways to abuse systems but AI is a little different and he has said this for a while that he wants to get his AI out early and often now one of the big
Issues with that is if he gets it out there ends up being a huge problem whether it’s the ability to abuse the large language model or any other issue or deep fakes or election interference these can be irreversibly damaging events so I get a little nervous when he
Talks about trying to get things out so quickly and wanting to iterate and looking back at Meta’s internal motto for many years it was move fast and break things but even Meta decided to kind of remove that quietly because at a certain scale and when your product has
Such a big impact breaking things is extremely negatively impactful because the world is going to keep changing with each iteration which means on those middle cases I don’t know what the right answer is yet nor does anyone because no one has really like thought through and seen
How the institutions and the world and society shift and reshape in response to this so again he is saying he doesn’t know he doesn’t know these middle cases it’s going to be obvious on both sides of the spectrum but in the middle where it’s not as clear he’s
Saying we’ll figure it out along the way not my favorite answer but I also don’t know a better answer how else are we going to test these new technologies other than getting them out into the world and seeing what happens but of course there’s the flip side to that
Which I already discussed there will be a lot of things that we’ll have to start slowly on and iterate as we go and there will be a lot of middle cases but we do want to support the US government and other governments too and like I find it odd that everyone thinks
This is a big gotcha question where they’re like going to say wait are you saying you support the US government and my answer is just yes now this next part of the interview I find extremely fascinating let’s take a look do you think we’ll be able to have one GPT add
Your number to it that really satisfies that or when you look at a country like one of the countries in the Middle East are they going to be comfortable with the kind of system you’ve built okay so just to clarify what Ina is asking they’re saying basically the same way
That Google’s no longer operating in China Meta is no longer operating in China because China has censorship laws and the US has different censorship laws and different governments different countries different people throughout the world have different morals and different values and should a large language model change depending on which
Country it’s being operated from so let’s see what Sam has to say I think there will have to be some Global standards um and and one thing I’m excited about is the technology itself I think can help us do that you can imagine GPT and talking to all of its
Users to really understand their value preferences relative trade-offs the nuance of that and then doing something that actually represents all of the people that want to use this model um there will be a question of defaults there will be a question of what the wide bounds of uh permissions possibilities are like
What what can you get a system to say what can it never say how much room do you have to customize it for yourself and we’ll see this at many levels so there will be versions by countries different cultures um individual people and I think the answer there is going to
Be to allow quite a lot of individual customization and that’s going to make a lot of people uncomfortable yeah so he’s basically saying we’re going to put GPT out and depending on where you are who you are what values you hold what morals you have we might change it specifically
For you and in fact the large language model will probably just learn what you think and reflect that back to you now in my mind that has the likely unintended consequence of increasing the echo chamber that is the internet so reinforcing the beliefs that you already
Have now Meta has been dealing with this and other social media companies have been dealing with this since the beginning of social media I don’t love his answer because we should have diversity of thought diversity of opinion and if a large language model is essentially reflecting back what you
Already believe that is going to have the opposite effect and does that mean open AI is comfortable to go into you know I don’t want to pick on an individual country because my hope is that they all expand uh their human rights but there’s a lot of countries
That don’t for example respect LGBTQ rights if country X said in order to operate in our country you have to change the way uh OpenAI models answer questions about that is that something you’re willing to do um I mean if like again in this idea of
Like a broad what the absolute constraints are if the country said you know all gay people should be killed on sight then no we would say this is like I hope anything that the world the people of the world would come to as a principle would say that is well out
Of bounds um but there are probably other things that I don’t personally agree with that a different culture might about gay people uh that you know the model should still be able to say and I think like it is not our we have to be somewhat uncomfortable
As a tool builder here with some of the uses of our tools uh and again there are things where I think we can just say we draw a line here absolutely so I’m going to guess and I don’t know this for sure
But my guess is that Sam Altman has very similar ideas to Elon Musk about free speech in that there are a set of laws we have to live by and companies have to obey and anything beyond that is considered free speech and so we’re going to be uncomfortable because we
Don’t agree with something another country has to say or how they’re using our GPT but as long as it’s legal it’s fine that’s my guess as to what Sam Alman is actually saying here and this part of the conversation continues here but if I’m hearing right you do foresee
A world in which open AI models might answer a question differently in different countries based on their values I would say more than that that it’ll be for different users with different values um which you know the country’s issue I think is somewhat less important but but a lot of countries
Aren’t comfortable with the users choosing what they get to and it’s not just the US is the question here like will we respect other governments uh will you you know it’s it’s come at before I mean you know Google decided not to do business in China okay I’m going to pause for a
Second so Sam Alman basically said gpts are going to be customized down to the user which I actually like that idea but then Ena goes on to press him and say yeah but within a lot of countries that’s probably not even going to be allowed so it is going to be at the
Government level yeah and there are there are countries where we decide not to do business um but there okay and he gives a very clear answer there are going to be countries where we just cannot find a way to align with their morals and values enough to offer them
Our services lot of governments who I think have different beliefs than us we can still say say there’s enough Common Ground here we’re heading towards a lot of the same values let’s work together okay now they’re getting into what I is calling the lightning round and it’s
Just going to be rapid fire questions but we only get two or three the first one is spicy so let’s watch when you returned you said you were hopeful that you would find a good role for Ilia and that things would continue have you found that role no update yet so he’s is
He working at at open AI which’re like I don’t know what the exact status of that is I love I think Ilia is like an unbelievable person and researcher um it’s you know obviously it’s sort of a traumatic event to go through but I am I’m very hopeful I care about him a
Great deal okay so if you’re not aware of what’s going on when Sam Alman was very briefly fired from open AI the person who was a big leader in that firing was Ilia and they’ve been close friends IL is a co-founder of open AI an incredible researcher an incredible mind
In the world of AI and then when Sam Alman came back to open AI Ilia was obviously in a very awkward position and here Sam Alman obviously didn’t give any new information unfortunately and next we have another spicy question let’s watch whether or not it was one of the
Factors that led the board to do what it did one of the criticisms of you and there were a few criticisms of you during that time um was that you know there’s all this stuff that open AI is doing why are you off you know raising money for startups you know building
Other things you know why aren’t you spending 100% of your time on these big issues that are confronting open AI so one of the reasons the board gave for firing Sam Alman was because they believed he was doing things outside of open AI that might conflict with open AI
He is a prolific startup investor he has also been raising money for AI chip manufacturing he’s also been working with Johnny I reportedly to create an AI I device which I believe is outside of open AI so let’s hear what he has to say about this are you continuing to raise
Money for things that aren’t open Ai and invest they are open AI like I think there was this misrepresentation you know not hard to guess by who but that I was like off in the Middle East raising money for this chip effort and that it was somehow like
A Sam project it was like an open AI project that the board had decided was a clear strategic priority okay so right there he clarified one of the biggest stories to leak during his time of getting fired was that he was out in the Middle East raising money for building
Chips and he is saying no that was explicitly for open AI it it’s not a separate thing so you know I’m not going to go try to like fight everything in the Press because you all will do your thing and I respect that um but that was like wildly misreported these are like
These are critical to open AI efforts I don’t invest much in personal startups anymore like I don’t invest personally much in startups I do support the ones that I invest in um previous to this things like with the AI pin and Humane yeah uh so there’s well that one’s now
Gotten like I wouldn’t do another round there because that’s gotten way too close to open AI but in general if there’s like a startup that I supported and they’re raising more money um I try to like continue to support but uh like I’m not I don’t think of myself as an
Investor anymore I have like some ongoing obligations but like this is what I’m doing so those are the major questions and answers from this interview he goes on to talk a little bit about what seite executive might need to think about when considering how to use chat GPT and he basically says
Use it internally don’t try to build things on top of it as much as use it internally and really build productivity and efficiency within your company and then the last question was what are we not talking about and Sam Alman says well infrastructure Investments are absolutely critical and we’re not even
Close to what we need in the infrastructure game so if you’re an owner of Nvidia I would be extremely happy right now this is not investment advice but boy that company will continue to absolutely dominate the AI chip market for the foreseeable future the one exception is Apple and I believe
Apple is extremely well positioned to run AI directly on their M1 M2 M3 chips they’ve already proven to be amazing at running AI on these chips if you liked this video please consider giving a like And subscribe and I’ll see you in the next one
The video “Sam Altman Just Revealed NEW DETAILS About GPT-5 In Spicy 🌶️ Interview” was uploaded on 01/23/2024 to the YouTube channel Matthew Berman.