20 Major AI News: New AI Agents, Quantum Computing Upgrades, AI Upscaling, and Text-to-Video Technology – Video

New AI Agents, Quantum Computing, Upgrades, AI Upscale, Text To Video (Major AI NEWS#20)

This week in AI has brought some groundbreaking developments, from the launch of IBM’s next-generation quantum computer to the release of new AI agents and models like Grok and Pika 1.0. Google DeepMind also introduced GNoME, a graph neural network for materials exploration. Additionally, AI upscaling technology is making waves, allowing for the enhancement of images and videos with incredible detail and clarity.

The AI landscape is rapidly evolving, with advancements in quantum computing, AI agents, text-to-video technology, and more. It’s clear that AI is poised to revolutionize various industries, from healthcare to gaming. Stay tuned for more exciting updates in the world of AI as technology continues to push boundaries and create new possibilities. If you want to stay informed about the latest AI news, be sure to subscribe to our newsletter for weekly updates.

Watch the video by TheAIGRID

Video Transcript

From AI agents to Google Gemini, this week has been absolutely outstanding in terms of what we’ve managed to get this week in AI, so let’s take a look at some of the things that you did miss. Because trust me when I tell you, there have been so many smaller developments in the world of AI that you’re going to want to know about, ones that have simply been overshadowed by the larger announcements, and I can’t imagine that next week is going to be any smaller. So with that being said, let’s get into one of the first things that you did miss in terms of the AI world.

So recently IBM actually launched their next-generation quantum computer. It says we’ve entered a new era of quantum computing, and if you don’t know, quantum computing will largely advance artificial intelligence, but take a look at this trailer. The video is called Unveiling IBM Quantum System Two.

So IBM Quantum System Two is the next-generation quantum processor and computing system recently announced by IBM. It’s the world’s first modular utility-scale quantum computer, designed to tackle complex problems that are beyond the reach of today’s classical computers. Quantum System Two stands 15 feet tall and operates in a near-perfect vacuum at temperatures colder than deep space. It’s initially powered by three 133-qubit Heron processors, which are IBM’s most performant quantum processors to date, offering up to a fivefold improvement in error reduction over the previous IBM quantum processor. The system is fully upgradeable and is part of IBM’s roadmap for the next 10 years, which prioritizes improvements in gate operations to scale with quality towards advanced error-corrected systems. By the end of 2024, each of the three Heron processors in Quantum System Two will be able to process a remarkable 5,000 operations in a single quantum circuit. The modular design of the system allows multiple Quantum System Twos to connect together to create systems capable of running 100 million operations in a single quantum circuit, and IBM plans to realize a system capable of running a billion operations in a single quantum circuit by 2033. IBM’s new quantum roadmap extends beyond the hardware, detailing the software and enabling hardware technology needed to deliver quantum advantage, the point where a quantum system can solve problems that traditional ones-and-zeros computers simply cannot solve in any reasonable amount of time. So essentially this system is designed to bring quantum-centric computing to reality.

I will play a small trailer from the actual voiceover, as long as it doesn’t get copyrighted. Introducing the IBM Quantum System Two, the world’s first modular utility-scale quantum computer system. Quantum System Two was designed to tackle complex problems that lie far beyond the reach of today’s classical supercomputers. It stands 15 feet tall and operates in a near-perfect vacuum at temperatures colder than deep space.

Initially powered by three 133-qubit Heron processors, Quantum System Two is fully upgradeable to the growing line of utility-scale QPUs that IBM will be releasing over the next five years. This is the world’s first modular utility-scale quantum system, so in addition to talking about physical qubits, we now need to be concerned with circuit size. By the end of 2024, each of the three Heron processors in Quantum System Two will be able to process a remarkable 5,000 operations in a single quantum circuit. But the real triumph of Quantum System Two is its modular design. Our new quantum coupling technology will allow multiple Quantum System Twos to connect together to create systems capable of running 100 million operations in a single quantum circuit. Continuing down this path, we plan to realize a system capable of running 1 billion operations in a single quantum circuit by 2033. That’s why we call Quantum System Two the building block of quantum-centric supercomputing. Today our clients and partners are already using our 100-plus-qubit systems to advance science, surpassing brute-force methods deployed on the world’s most powerful classical supercomputers, and soon they expect quantum applications offering unprecedented business value. Our mission is to bring useful quantum computing to the world, and it starts with... And yeah, quantum computing just got a huge upgrade.
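
For readers who want to poke at this themselves: the “operations in a single quantum circuit” figures are essentially gate counts, and you can already count them for your own circuits with IBM’s open-source Qiskit SDK. Here is a minimal sketch (ordinary Qiskit, nothing specific to Quantum System Two):

from qiskit import QuantumCircuit
# Build a small entangling circuit and report its operation count. The roadmap
# figures quoted above (5,000 ops per circuit in 2024, 100 million via coupling)
# refer to counts like qc.size(), just at vastly larger scale.
n = 10
qc = QuantumCircuit(n)
qc.h(0)                      # put the first qubit into superposition
for i in range(n - 1):
    qc.cx(i, i + 1)          # entangle neighbouring qubits, GHZ-style
qc.measure_all()
print("operations in this circuit:", qc.size())
print("circuit depth:", qc.depth())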

Of course, we also got the news that the Grok AI beta is now rolled out to all Premium+ X subscribers in the United States. Elon Musk said that there will be many issues at first, but to expect rapid improvements almost every day, and that your feedback is much appreciated. It will expand to all English-language users in about a week or so, Japanese is the next priority since it’s the second biggest user base, and then hopefully all languages by early 2024. So with Grok being released, it’s very interesting, because this is one of the only large language models that is embedded straight into a social media platform, which means that it’s going to be getting real-time data on itself, which is crazy; that’s something that we haven’t really thought about before. Some people like it, some people don’t. I think what’s going to be fascinating is how Grok does evolve. As we know, right now it is something that is plugged into Twitter, but how will it evolve in terms of its use and functionality when other companies’ large language models are at such an advanced level? What angle are they going to take? Are they just going to keep this as the funny slash informative chatbot, or is there going to be more of a Perplexity-style approach where you can use it to search real-time events? I think that would be really, really fascinating.

I actually also did cover this earlier this week, but you might not have seen it: this was GNoME, and GNoME was pretty insane, guys. Essentially, GNoME stands for Graph Networks for Materials Exploration. It’s a state-of-the-art graph neural network developed by Google DeepMind, and it uses deep learning to predict the structure of new inorganic crystal substances, which are fundamental to the digital economy. The GNoME model takes the form of a graph: inputs are converted to a graph through a one-hot embedding, and it predicts the total energy of a crystal, which is a crucial factor in determining the stability of a material. The model has been trained to provide accurate characterizations of atomic interactions even beyond its training distribution, and it’s made significant contributions to the field of materials science by predicting the structure of 2.2 million new inorganic crystal substances. This number is 45 times larger than the number of substances discovered in the history of science, and of these, 380,000 structures have the best chance of successfully being made in the lab. It has been likened to AlphaFold, another DeepMind AI system, which predicts the structure of proteins with high accuracy.
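
To make the “graph in, total energy out” description a bit more concrete, here is a toy message-passing network in PyTorch: one-hot element types as node features, bonds as edges, and a summed readout predicting a single energy value. It is purely illustrative and is not DeepMind’s GNoME architecture:

import torch
import torch.nn as nn
# Toy message-passing network. NOT GNoME; just the shape of the task it solves.
class ToyCrystalGNN(nn.Module):
    def __init__(self, num_elements=100, hidden=64, rounds=3):
        super().__init__()
        self.embed = nn.Linear(num_elements, hidden)
        self.message = nn.Linear(2 * hidden, hidden)
        self.readout = nn.Linear(hidden, 1)
        self.rounds = rounds
    def forward(self, node_onehot, edge_index):
        # node_onehot: [num_atoms, num_elements]; edge_index: [2, num_edges]
        h = torch.relu(self.embed(node_onehot))
        src, dst = edge_index
        for _ in range(self.rounds):
            msgs = torch.relu(self.message(torch.cat([h[src], h[dst]], dim=-1)))
            agg = torch.zeros_like(h).index_add_(0, dst, msgs)  # sum messages per atom
            h = h + agg                                         # residual update
        return self.readout(h.sum(dim=0))                       # predicted total energy
# Tiny fake "crystal": 4 atoms of two element types, bonded in a ring (both directions).
atoms = torch.eye(100)[[8, 8, 26, 26]]
edges = torch.tensor([[0, 1, 2, 3, 1, 2, 3, 0],
                      [1, 2, 3, 0, 0, 1, 2, 3]])
print(ToyCrystalGNN()(atoms, edges))  # untrained, so the number is meaningless

An untrained toy like this obviously predicts nonsense; the point is just the structure of the problem that GNoME solves at massive scale.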

Now, what’s also cool is that we did get to see a lab facility at Berkeley Lab where an AI robot was actually making some of these materials. It says that using materials from the Materials Project and insights on stability from GNoME, the autonomous lab created new recipes for crystal structures and successfully synthesized more than 41 new materials, opening up new possibilities for AI-driven material synthesis.

Now, if you didn’t see this next one, it’s probably because you aren’t on Twitter and you aren’t looking at exactly what’s going on in AI, but you can see that this does have 19 million views on Twitter, so if you were on Twitter in the AI space you likely did see it. What we have here is Pika Labs, and if you didn’t know, we covered Pika Labs around five months ago when they initially launched and had their

you know, standard text-to-video. But this is Pika 1.0, and this is essentially their up-to-date model, their latest iteration and their first generation of what they deem to be high quality, and my oh my, an entire video is coming on this. Because I don’t know if you’ve seen the trailer, I’m going to full-screen it right now, but this is Pika 1.0, and the consistency, the fidelity, the quality of this model is just simply outstanding. It’s definitely surprising that the company was able to take on Runway and do it with such quality. It seems like what happened before, when we had Midjourney overtake what we had with DALL-E. So as you know, competition is good, and having this text-to-video breakthrough is something that I didn’t think would happen as early on as it did. Now, there are still some

benchmarks that we do need to surpass in terms of the quality, the length, and other stuff like that, but I will say that one thing Pika does get very, very well is the sense of style. So you can see here that you also do have this secret one where you can add different things into certain videos, but I think their standout point is that certain styles, like their animated videos, are absolutely incredible, and the anime ones as well, those ones are so, so good. So I’m going to show

You some of those examples now so this is one of the ones of a bunny rabbit in nature you can see that this is pretty consistent in terms of the character and the movement I thought that this one was super super cool because for some reason

Pika Labs, I think, trained on a huge amount of Disney animations, because I’ve seen loads of those and they look really really good so there’s this one there was also this AI animated video which uh looks pretty decent in terms of what the consistency is and this one isn’t one of

My favorites but it’s still pretty cool because it showcases what you can do with the more realistic side of the stuff now of course AI video still does have that small flickering issue but what you’re not hearing is the sound design for this one um because as always

Sound is sometimes an issue on YouTube but I will leave a link to these so that you can actually watch them over on x.com or twitter.com but yeah the examples of this are really really fascinating for example this is the anime version and this was by far the

one that I found to be absolutely incredible. So all of what you’re seeing right now is generated with AI text-to-video, and I think this just shows us that this is the first iteration of the model; what is version 5 going to look like? How crazy is that? And yeah, a lot of me doesn’t even believe that this is actually real, because the quality is just too good. You know how sometimes you’ll see an image in Midjourney and you’ll think to yourself, ah, there’s no way an AI generated that? I do get the same sort of feeling here, but yeah, this is 100% text-to-video, which shows us, within the next 3 years, what kinds of content could we have being created with AI? It’s definitely going to be so, so

Surprising so like I said for some reason this does work really really well with anime Styles and I’m going to show you guys some of the animations Styles as well that it does work well with so you can see here that these four examples in terms of Animation are

really, really looking good in terms of what we see, and that’s why I wanted to showcase these ones. Because I think what they did, and I’m just speculating, is that like I said they just trained on maybe millions and millions of examples of specific styles, and then they use those specific styles when it’s generating an output. So if someone says anime style, it probably has a subset or, you know, a specific category where it’s just the anime ones, it uses those as references, and then I’m guessing that’s how they’re able to get this

super, super high quality and super consistent in terms of what we have. But I do know that they’re definitely using some kind of new technique, because we’ve never really seen this before; the only thing we did see recently was some stuff from Meta, which was a Meta text-to-video model, so maybe they’re using some techniques from that. Either way it’s really, really outstanding, so I can’t overstate how excited I am for this, and I do really want to see where we are in a year. Don’t forget to subscribe to the weekly

newsletter, where you can get all the AI news you missed in one clear and concise email. And of course, we had something that not many people really did cover, but I found it really cool, so I wanted to share it with you guys. This is called the Julius iOS app, and it’s an AI that solves math, analyzes data, creates visualizations, and writes and executes code. I feel like something like this is definitely going to be something that shows us how large language models are going to be

deployed. Because one thing that I’ve seen before in terms of business trends is that usually there’s one thing that people have, but then over time, as things develop, you get lots and lots of more specialized versions. So for example, we have phones, and then of course now we have all the apps on the phones, and apps for all sorts of specific stuff, and now we have this thing called Julius, where it’s an AI that you can carry in your pocket, and it’s just this really smart AI. So it will be interesting to see; I guess it’s like ChatGPT with Code Interpreter, but it’s just, you know, like your personal software engineer.
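
Julius hasn’t published its internals, but the general “personal software engineer” pattern it resembles, where the model writes Python for your question and the app runs it, can be sketched in a few lines with the OpenAI Python client. Treat this as a hypothetical outline of the pattern, not how Julius actually works:

from openai import OpenAI
client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set
def answer_with_code(question: str) -> None:
    # Ask the model for runnable Python only, then execute it.
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Reply with only runnable Python that prints the answer. "
                        "No prose, no markdown fences."},
            {"role": "user", "content": question},
        ],
    )
    code = resp.choices[0].message.content
    print("--- model-written code ---")
    print(code)
    # A real product would run this in an isolated sandbox, never directly like this.
    exec(code, {})  # demo only: the generated code prints its own answer
answer_with_code("What is the compound interest on 1000 at 5% a year after 10 years?")

The hard parts in a real product are the sandboxing, the file handling, and feeding execution errors back to the model for another attempt.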

I do find the fact that people are now making stuff like this to be really cool. This isn’t a sponsored video at all, I wish it was, but yeah, definitely something cool that you should check out. Now, there was this thing, okay, and this was a huge, huge problem. In a recent video, I’m not sure

if that video is public yet, but there’s a deep-dive video that we’re going to be doing on synthetic data, and this is why I talk about synthetic data being a key issue in AI, because what we have here is prompt injection. Essentially, prompt injection continues to expose many vulnerabilities of large language models, including PII, and for customer-facing applications you can substantially lower the risk of that by using Guardrails AI. Now, the problem is that this was discovered by these researchers, and luckily, with how the disclosure went, they decided to alert OpenAI months in advance, so OpenAI was of course able to patch this, and they’re just showing it to us now. But the problem is that this was really easy to do: all they said to the AI was to repeat one word forever, poem poem poem, and then it leaked personal details. I mean, imagine if those were your details, imagine if that was your personal address, your personal phone number, your website, your personal email. Of course you

might be thinking, well, that information is public anyway, but the problem is that with certain prompts it’s just going to leak your stuff, and do you want millions of people being able to see your stuff? That’s not a good thing to have. This is why in another video I talked about synthetic data being a problem solver for this, because if the training data isn’t real, then of course it isn’t anyone’s personal information. So I do find that we’re going to keep seeing exploits and stuff like this come about.
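
The “guardrails” idea mentioned above is conceptually simple on the output side: scan what the model is about to return and redact anything that looks like personal data. Here is a minimal regex sketch of that idea; it is not the actual Guardrails AI library, which does far more robust validation:

import re
# Minimal output filter: redact strings that look like emails or phone numbers
# before a model response reaches the user. Illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}
def redact_pii(model_output: str) -> str:
    cleaned = model_output
    for label, pattern in PII_PATTERNS.items():
        cleaned = pattern.sub(f"[REDACTED {label}]", cleaned)
    return cleaned
leaky = "Sure! You can reach Jane at jane.doe@example.com or +1 (555) 867-5309."
print(redact_pii(leaky))
# -> Sure! You can reach Jane at [REDACTED EMAIL] or [REDACTED PHONE].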

But of course, as we move forward, we’re going to see these things patched and resolved, so I’m actually glad that this wasn’t a bigger thing in the community, because it meant that OpenAI was able to see this and then of course solve it really quickly. Then of course we had GPT-4 achieving 90%-plus on the MedQA test in a set of new state-of-the-art medical benchmarks. It says no intense fine-tuning needed; we unlocked GPT-4’s domain expertise simply via prompting, outperforming heavily fine-tuned models. So this is crazy, because you can see here that GPT-4 with no fine-tuning outperformed Med-PaLM 2, although it’s only marginal because it went from 86.5% to 90.2%. This is still incredible, because it means we now have an AI system that is capable of getting 90% on a really important medical benchmark.
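
For context, Microsoft’s “Medprompt” work behind numbers like that combines chain-of-thought, dynamically selected few-shot examples, and choice-shuffle ensembling. A bare-bones sketch of the prompt-only idea with the OpenAI client looks roughly like this, with the worked example left as an obvious placeholder; it is not their actual pipeline:

from openai import OpenAI
client = OpenAI()  # assumes OPENAI_API_KEY is set
# Placeholder worked example; Medprompt selects these dynamically per question.
FEW_SHOT = ("Example question: A patient presents with symptom X...\n"
            "Reasoning: step 1..., step 2...\n"
            "Answer: (B)\n\n")
def answer_medical_mcq(question: str, options: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You are answering medical board-style multiple-choice questions."},
            {"role": "user",
             "content": FEW_SHOT + question + "\n" + options +
                        "\nReason step by step, then give the final answer as a single letter."},
        ],
    )
    return resp.choices[0].message.content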

I wouldn’t be surprised if in the coming years we do start to see these AI systems be rolled out, because so many of the times, like when you’re trying to figure out what’s wrong with yourself online, how many times do you go through an issue where it asks: how old are you, what age are you, do you have this, and so on? And there are so many services, especially here in the UK, where you call up and essentially they just follow a flowchart; I literally spoke to them before and they said all we do is literally just follow a flowchart. So are you going to have an AI voice that simply asks you what’s going on via the Whisper API, follows the flowchart, and then you just get this rolled out to every single country? It’s basically, I guess you could say, a form of diagnosis, so I think something like that is definitely going to be there.
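
That “we literally just follow a flowchart” structure is trivially easy to encode, which is why it feels so automatable: transcribe the caller (for example with the Whisper API), walk a decision tree, and read the next question back out. Here is a toy sketch of the decision-tree half, with made-up questions that are obviously not a real clinical protocol:

# Toy triage flowchart: each node is a yes/no question leading to another node
# or an outcome. Hypothetical questions only; not a real clinical protocol.
FLOWCHART = {
    "start":    {"question": "Is the person conscious and breathing normally?",
                 "yes": "fever", "no": "EMERGENCY: call an ambulance now."},
    "fever":    {"question": "Do they have a fever above 39C?",
                 "yes": "duration", "no": "Self-care advice; call back if it worsens."},
    "duration": {"question": "Has the fever lasted more than three days?",
                 "yes": "Book a same-day appointment with a doctor.",
                 "no": "Rest and fluids; recheck in 24 hours."},
}
def run_triage(answers):
    # answers: iterable of "yes"/"no" strings (in a real system, from speech-to-text)
    node = "start"
    for reply in answers:
        step = FLOWCHART[node]
        print("Q:", step["question"], "->", reply)
        node = step[reply]
        if node not in FLOWCHART:   # reached an outcome rather than another question
            return node
    return "Flowchart incomplete for these answers."
print(run_triage(["yes", "yes", "no"]))   # -> Rest and fluids; recheck in 24 hours.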

Whether or not a company decides to monetize this and just offer it to governments, or OpenAI offers a specialized version to governments and countries where they don’t really have great medical care, that would be something pretty interesting. Now, with this, what I would want to see, and this is something I’ve spoken about before, is that it would be so cool if we could get GPT-4 to start to use video and maybe live feeds to analyze exactly what’s going wrong with someone. So for example, let’s say you’ve got a strange, I don’t know, maybe you’ve got a kind of rash or something; you could have a fine-tuned version of GPT-4 in the future, when it’s less expensive, take a picture of that, the AI looks at it, it recognizes it because it’s seen millions of different ones, and it instantly knows, like with a

99.9% accuracy, what you have, and then can instantly diagnose you. I definitely feel like that is going to be a future of AI in terms of looking at stuff, and of course it’s not going to be that much better than a person in real life, I mean some would argue that it will be, but I do find that with 90%, and then in the future if we do get something like 95% or 99%, it’s definitely going to be something that would be appreciated, I’m sure. And I do think that we need to get as close to 100% as possible, because if this is rolled out to 100 million people and it is at 99%, you can’t have 1% of people getting the wrong diagnosis, because that’s 1 million people that have wrong information. So yeah, this

is definitely going to be something that is going to be good, but there will be certain legal implications for stuff like that if it does get rolled out. Then, of course, we do have an updated version of AutoGPT, so AI agents are going to be here in the future. We did talk about this in another video, but this is essentially an AI using Google to just write a poem. Essentially he asked the AI, which says hello, I can help you with anything, what would you like to do, to go to Google Docs and then write a poem about open-source software, and then the AI system actually goes ahead, goes to Google Docs, opens the document, and starts writing about open-source stuff. Now, this is a pretty basic task, but like we know with AI, it’s always able to do the basic tasks first, and then soon we’re able to see it jump and do this and then jump and do that. So I think if OpenAI does focus on agents in 2024, that would be interesting, because that would give them a huge, huge leap in terms of what we’re going to be able to see these AIs do, because autonomy is something that we haven’t really achieved yet; we’ve just left the AIs in their box, and then we prompt them and they spit something out. So this is going to be a

really, really interesting development, because giving an AI system autonomy is also pretty dangerous, in the sense that it has to think for itself and it just does stuff without us intervening. So it will be interesting to see how this moves on, but still pretty cool. Then we have Animate Anyone. Animate Anyone kind of scares me, because this isn’t good in terms of when we have social media verification issues, with people pretending to be other people. Essentially, what this is, is that you have a picture of someone and then you have a pose sequence of someone, and as long as you have the right pose, you can use that pose and then animate that person. So you can see right here, this is the captured pose, they’ve captured whatever details they needed to capture, they’ve taken this image and they’ve animated this person dancing. Now, I’m pretty sure that with these datasets they had the same person do the same pose, because I’ve seen some of these; there were two systems that were released, I’m going to show you guys the other one, but it does look weird

sometimes when it’s the wrong person doing the action. But you can see here, you can actually animate the characters without having a 3D model, so this is crazy, this is really, really crazy stuff. Like I said, this is just the first iteration of this stuff, and I really wonder how this is going to be working; there’s a full paper and a full thread on this. I’m pretty sure it’s the other one that isn’t as good as this, so yeah, this Animate Anyone one is insane, and this is why I say that in the future social media is going to be completely changed, because unless these platforms require users to disclose that their content is artificially generated, we are not going to know what is what. Like, seriously, you get an image of someone, it could be an image of you, it could be an image of just someone walking down the street; the possibilities are absolutely crazy with this. So do we even need 3D models anymore to generate these basic animations? I don’t know, but the technology is impressive. I do have to state that the people who did this, hats off to you, because

impressive stuff, but it is a little bit worrying in terms of what we have to think about for the future. Then, essentially, what we had was Meta introducing Ego-Exo4D, a foundational dataset for research on video learning and multimodal perception. So essentially what this is, is Meta made a huge dataset that they want to use in order to make AI systems better. Pretty much, it’s a new dataset created by Meta’s FAIR and Project Aria in collaboration with 15 universities, capturing both first-person and third-person perspectives; I will play the video in a moment. It does involve over 800 participants from various countries, providing a diverse range of skills and activities in the dataset. It also includes advanced features like narrations, commentaries, and sensor data, enhancing the AI’s ability to understand human actions, and it introduces benchmarks for AI tasks such as activity recognition and skill proficiency estimation, aiding in the development of applications like AR and robotics. In addition, there are future AI applications for this: it enables new AI applications in augmented reality, robot learning, and social networking by improving AI’s understanding of human skills. So this is a really big thing, but

I’m pretty sure that a lot of people didn’t see this come out, and stuff like this always comes out that you always miss, because the main thing is, you know, GPT-4 and then Gemini. But if you’re someone who’s really into the AI space and you really want to know everything that comes out, this is definitely something that you do have to look at. We observe and study human abilities in many different situations, from everyday tasks to aspirational ones. What does it mean for AI to truly understand these human skills, and what steps are needed to get

there? We’re introducing Ego-Exo4D. Ego-Exo4D is an initiative that aims to advance AI understanding of human skill by building a first-of-its-kind video dataset and benchmark suite. We know that the foundation of visual learning is our ability to observe others’ behavior from an exo view and map it onto our own actions in an ego view. That’s why with Ego-Exo4D we’ve created a dataset of simultaneously captured first-person perspective data and third-person video of skilled human activities. This project is a collaboration between 15 international universities, Meta FAIR, and Project Aria; we call this the Ego4D Consortium. Together we collected over 1,400 hours of video data from over 800 people across the globe, showcasing a set of eight physical and procedural skills. I’m a registered nurse, I’ve been a nurse for over 8 years. Our participants and experts wore cameras to capture their skills while they performed a variety of activities, from cooking to bouldering. At the same time, we recorded third-person video from diverse fixed perspectives. This initiative was achieved leveraging Meta’s Project Aria rich sensor suite; the multimodal data sequence includes camera poses, eye gaze vectors, 3D point clouds, and spatial audio. But fully understanding human skill also requires an expert human perspective. This is why Ego-Exo4D includes a first-of-its-kind natural language accompaniment, where experts provided insights and tips linked right into each video. The Ego-Exo4D benchmark tasks are intended to advance first-person video understanding, with a focus on recognizing the key steps in skilled activities, inferring the level of proficiency, relating the objects between the first- and third-person views, and estimating the movements of the hands and the body. What will it mean once AI can understand skilled human activity in video? Imagine putting on a pair of smart glasses to quickly pick up new skills with a virtual AI coach, or imagine teaching robots to observe and perform new skills without much physical experience. Our commitment to open science means we will release the Ego-Exo4D dataset and benchmark tasks to the research community. Together we

can unlock a future of more immersive learning for a more connected world. Okay, yeah, so this is MagicAnimate, and this is a new AI model that can animate human movement; this is what I talked about before, it’s quite like the other one. And the thing with this is that this one is really good, not that the other one wasn’t good, but this one really is good. I know the quality of the Twitter video isn’t that good because of latency or whatever, but the problem with what we’re seeing right here is, like I said before, that using real humans in real environments to get them to do whatever you want just based on an animation isn’t good at all. So you can see right there, you have the motion and then you have the person doing it. The only thing that I would say is that you do need an actual good actor, because this one’s a man, and you can see that it looks like a very manly version of Wonder Woman. But with the technology increasing at this rate, this is not something I expected, and it’s not even something that I thought of doing. This page does actually have better quality, so I’m going to go over here, but I mean

there are small things wrong with the video, like really small, but you wouldn’t really be able to notice; I can see her hands are a little bit glitched out, but that’s not that bad in terms of actually achieving the animated goal. So essentially all you need is just a reference image and then the animated footage right here. And what’s crazy is, I don’t know what the tweet is, but just a couple of hours ago someone did release a way to get this kind of footage from any video footage, so you could take a movie clip of someone doing something, convert it into this purple, yellow and green footage, and then just use it to animate this person doing whatever they’re doing. It does show the previous versions of this, and then it shows “our version,” which is their version.
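
Neither team has spelled out its exact preprocessing (the purple-yellow-green footage looks like DensePose-style maps), but extracting a driving pose sequence from ordinary video is already doable with off-the-shelf tools. Here is a rough sketch of that first half of the pipeline using OpenCV and MediaPipe, assuming a local file called dance_clip.mp4:

import cv2
import mediapipe as mp
# Extract a per-frame skeleton from ordinary video footage. This only covers the
# "get a driving pose sequence" half; the animation models then condition
# generation on sequences like this plus a single reference image.
pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("dance_clip.mp4")     # hypothetical input file
pose_sequence = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        # 33 landmarks per frame, each with normalised x/y coordinates
        pose_sequence.append([(lm.x, lm.y) for lm in result.pose_landmarks.landmark])
cap.release()
print(f"extracted {len(pose_sequence)} frames of pose keypoints")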

This is the recent update and you can see this is the reference image that they have and then what they’re able to do with that so I mean once they get the hands down it’s going to be pretty crazy stuff like that’s going to be pretty

Crazy once they get the hands down you can see these other ones definitely look a bit weird but this one right here is incredible once they get the hands down that is uh that’s going to be I mean I don’t know guys this is uh pretty scary

But like I said a muscular Wonder Woman pretty crazy and then um yeah I don’t know this is something where I think like I said technology advances and we have to really understand what is real and this of course you might see and think okay yeah this is AI generated but

Um in terms of other ways that this can be used maliciously you do have to think about that as well so I did want to bring this up because I found it to be a fascinating piece of new technology and it will be interesting to see how it

does get used in the future. Then, of course, we had AI upscaling, and this is so, so crazy; I can’t believe that this is real, because previously we used to watch TV shows where they would be like, click, enhance, and then the image just enhances and it’s like, wow. But now it’s actually real. So this is a demo you can take a look at, and this guy said: Magnific AI is the first time I feel like an AI workflow actually retains the soul of an image, this is completely insane. Honestly, I agree; the subtle details on the dust on the ground, the cracks in the concrete, and the individual leaves on the plants, this feels like magic. We’re not quite at the computer “enhance” levels that we saw in Blade Runner and CSI, but it’s still pretty incredible. So you can see right here, this is the source image, this is the real source image, guys, and you can see that from the source image, that is what we get. How crazy is that? That’s the enhanced version, and you can see side by side all the smaller details that we do get. Now, of course, this is made with generative AI, so the details aren’t going to be one-to-one, pixel perfect, but it’s close enough that this is a real, usable technology that I guarantee you I’m definitely going to be using in my workflow.
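
Magnific and Krea haven’t said how their upscalers work, but the general “generative upscale” trick, inventing plausible detail rather than just interpolating pixels, can be tried today with open models. Here is a hedged sketch using the Stable Diffusion x4 upscaler from Hugging Face diffusers (not the model these services use), assuming a local low-resolution image:

import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image
# Open generative upscaling sketch: a diffusion model hallucinates plausible
# high-frequency detail guided by a text prompt, rather than just resizing.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
low_res = Image.open("cybertruck_lowres.png").convert("RGB")   # hypothetical input
upscaled = pipe(prompt="a dusty pickup truck in the desert, photo", image=low_res).images[0]
upscaled.save("cybertruck_4x.png")

The prompt matters because the model is genuinely generating detail, which is also why the results look convincing rather than being strictly faithful to the source.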

I’ll use it when analyzing certain images if I do want to get the enhanced version. And that just reminded me, I need to show you guys: there were two other pieces of software that were released this week that are all about enhance, and they are all really good. So you have this one, which is Magnific AI, which is really insane, and I wanted to show you this one from Magnific AI. You can see right here we’ve got this, so this is the Tesla Cybertruck, and you can see it’s a bit blurry, it’s a bit dusty

and then you’ve upscaled it, and you can see how much detail there is; just look at the rocks, look at the dust. This is just crazy; it literally left me speechless. When I saw this I thought this has to be fake, this is just not real, but this is going to be one of my favorite tools to use going forward. And yeah, it’s just absolutely crazy; I know that the Tesla Cybertruck doesn’t actually look like this, because of course generative AI has added some small stuff, but in terms of the detail around here, I think this is really, really impressive. And then we also have another one called Krea AI; I’m not sure which software is being shown on screen right now, because it is showing both Magnific and Krea AI, and they’re both AI upscalers. But you can see that it literally takes a PS1 character, like look at that guys, it literally takes a PS1 or PS2 character and then upgrades them to an actual human. I didn’t think I’d see that, guys; I literally never thought that was going to be possible. But imagine you could get this in real time, like imagine this is upscaled in real time with consistency; you could play a PS1 game, put it through a filter, and then play it like that. That would be

absolutely insane. So yeah, this is crazy, guys. We also have some other examples, and it says, if GTA Vice City was released in 2023; you can see the original pixels that we have and then exactly what happens. I mean, look at this image and then look at that image, guys, that is a truly incredible upscale, that is absolutely insane. I don’t know how it does this, I’d love to look into the inner workings of it, but wow, I just have to say wow, because this is just crazy. And then of course there’s this realistic landscape, so you have this blurry image, this was actually one of the first ones I saw, and then of course we have this upscaled one, which is so clear, so much clarity, and it retains the majority of the same image. So I feel like that is just incredible, just really, really incredible stuff, and I honestly wouldn’t have believed it if someone told me this and I hadn’t done a ton of research, but it is real and you can use it; I’m pretty sure you get like a 3-day trial or something, but it’s

definitely 100% worth it. Now, it also seems like this does work for video, so this is an AI-generated trailer where they’ve generated an AI video but put it through, I don’t know if it’s Magnific or Krea, but essentially we have ourselves a video that is an upscaled version of what we had previously. If you didn’t see the previous version, the problem with AI videos is that the quality isn’t that good in terms of the pixel density, so that is always an issue, but this might be one of the keys to solving that problem. Maybe now we get videos that, once they are processed out, are processed through another layer that just upscales the quality with this filter. And yeah, this one is Krea, and I wanted to show you guys this tweet because this one is just crazy. Like, if you ever

saw this image and thought about upscaling it, and then you were told, yeah, an AI was able to do this, that’s crazy. I mean, you have to admit that that is absolutely incredible, and I’m going to show you guys one more.

So yeah, here it is, and this one is crazy, this is absolute insanity. It shows the low-resolution version, it generates a reference, and then you get a super-resolution result. So imagine someone gives you this image and they’re like, you know what, I can upscale that, and then boom, they show you that one. That’s blowing my mind. Honestly, at first I was like, okay, it’s just generating this image which is kind of similar, okay, that’s kind of cool, but literally everything is perfect. I mean, I’m speechless right now, that is absolutely incredible. So AI, in terms of vision, that’s just crazy. Now, it’s not open source just yet, but if this does become open source, the implications are going to be terrifying, because someone could get a pixelated image of something and enhance it into a super-resolution version. Now, I think these examples do work well because the AI is able to tell what the subject is: this is a breed of dog, this is a parrot, you can see it’s a parrot, and you’re able to know what’s what. But yeah, like I said, this is incredible stuff coming, and a lot of this stuff is stuff that I definitely didn’t expect. So yeah, this is a crazy, crazy week, and you can see right here that you can literally look at the first image and then see how crazy that quality is. I mean, look at the eye on the first one, look at the face, and then we can see the face there. I don’t even know how this works, but I am truly, truly impressed with this; by far this is the most improvement we’ve ever seen, at least that I’ve ever seen, I mean I can’t state that as a fact. But yeah, let me know what you thought about this week in AI. As always, check out the email list, because we’re going to be

sending emails about the stuff that you did miss, because every week is jam-packed. We’re basically just going to give you a TL;DR; I’m just going to sum everything up in a no-nonsense way, there’s not going to be a lot of technical jargon, it’s just going to be basic emails. Just check out that link in the description. But yes, stuff like this is absolutely incredible, so if you did enjoy this video... I mean, I’m still going to be spending the next couple of hours looking through every single example I can find, because

Video “New AI Agents, Quantum Computing, Upgrades, AI Upscale, Text To Video (Major AI NEWS#20)” was uploaded on 12/08/2023 to Youtube Channel TheAIGRID