Latest from the world of AI: Google Gemini, Major ChatGPT Breaches, Google’s Text To Image Tool, and More – AI News #23 – Video

Latest from the world of AI: Google Gemini, Major ChatGPT Breaches, Google’s Text To Image Tool, and More – AI News #23 – Video

Major AI News #23 – Google Gemini -2 , Major ChatGPT Breaches, Google Text To Image And More

The latest AI advancements have truly been monumental with Google Gemini-2, major ChatGPT breaches, Google Text to Image, and more making headlines. introduces the ability to browse and speak to historical figures and philosophers with their clone yourself feature, reminiscent of a Black Mirror episode. A research paper on unnatural error correction reveals the impressive capabilities of GPT-4 in unscrambling text and answering questions accurately. Meanwhile, MidJourney unveils their new Alpha website, teasing the upcoming V6 update with promising image quality improvements. Additionally, Plabs showcases their text-to-video technology with realistic animations, while Runway introduces motion brush for emotion manipulation in images. Google’s language model advancements in search, meta’s seamless AI translation, and deepfakes in real-time are at the forefront of AI developments. OpenAI’s breakthrough in discovering new solutions in mathematics using large language models through prompt engineering sets a new paradigm for AI research. Finally, Google’s Image In2 technology provides state-of-the-art text-to-image transformation, though currently limited to vertex AI users. These advancements herald a new era of AI innovation and possibilities.

Watch the video by TheAIGRID

Video Transcript

So with another Monumental week in artificial intelligence and this one was truly Monumental because the Breakthrough that was made was something that AI Skeptics had said for a long time was never going to be possible so make sure you watch the video Until the End because that is going to be

Something you do want to see without further Ado let’s get into what makes this week in AI so special one of the first things I want to show you guys that I really really think is cool but at the same time I think it’s kind of like Black Mirror because Black Mirror

Did predict something like this and it’s already true so this is a website called deli. and essentially it’s called clone yourself now this isn’t sponsored or anything like that I only brought this up because this is really really cool so essentially what you can do is you can

Scale yourself infinitely and you can also browse clones now I think this is really cool because what it does tap into is it does tap into that ability which AI brings us which is the ability to clone ourselves and of course speak to other people that we would otherwise

Not be able to so without further ado let’s just show you guys what it means so you can see browse clones and then here right here you can see there’s a list of figures that you can talk to about certain topics so for example if you scroll down to here I thought this

Was really cool you can see that there’s philosophers and there’s us as presidents so what’s really cool about this is that you can see that you can speak to all of these presidents and I’m guessing that it’s fine-tuned to talk like them to act like them to share

Their I guess you could say thoughts and philosophies which is really cool because you know if you do ask chat gbt hey what was your philosophy on what was this president’s philosophy I feel like that would be really harder to gauge and I’m pretty sure all of these are fine

Tuned to make sure that this is of course really up to date and of course we have philosophers and these you know teach you like loud so I’m pretty sure I know that Sono the one for the Art of War um and I’m pretty sure like Marx AEL

Aelius like all of these philosophers teach you many many different things so I think websites like this and the reason I feel like this is going to be a bigger thing and why I brought this up is because the Black Mirror episode what it actually did show us was the fact

That we could clone people that were essentially dead and they got to live on on social media so imagine someone like an AI company decides to take all your tweets and then all of that persons I guess you could say voicemails you know and I guess they bring that person back

To life in the form of like an AI clone I wouldn’t be surprised if that was a company in the future because we do have companies like 11 Labs that are very good at imitating your voice so you can get the voice and as long as you have

Enough data on how that person talks you could literally just fine-tune a very very small model on that and then essentially you could really bring yourself back from the grave although it wouldn’t be the exact same person it definitely would be interesting to see if this does become a reality then there

Was something really really cool we had this paper talk about unnatural error correction GPT 4 can almost perfectly handle unnatural scrambled text so basically what you can see from this paper is the fact that even with scrambled text gp4 is able to recover the entire sentence so it says while

Large language models have achieved remarkable performance in many tasks much about the in working remains unclear B basically what they’ve done okay cuz I don’t want to read this entire thing and waste you guys times it is really really cool but if we just zoom in just to this part right here you

Can see that it says the following sentence contains words with scrambled letter please recover the original sentence from it and then it has a scrambled sentence right here and of course right now looking at this sentence you might think what on Earth is this like what like what is this this

Is just scrambled words but surprisingly GPT 4 and even in most cases GPT 3.5 can recover this entire scrambled sentence to completion and you can see it recovers this entire scrambled sentence with the entire word in completion and then of course we can see here that it’s actually able to answer questions even

With scrambled text so you can see that I didn’t ask it to scramble the text or unscramble the text first it was essentially just asked a question based on Scrambled text and then it essentially answered that question so clearly it’s an in know in a workings was able to unscrabble that text and

Then of course answer the question based on that so and what was crazy is as well as well what was crazy about this question you can see that it says which professional golfer won the 2023 Masters tournament and it has four questions and it says and this is your evidence it

Doesn’t even say that this is a scrambled sentence it literally just says this is your evidence so I do find that to be crazy and then essentially what we can see here is we can see GPT 4 we can see GPT 3.5 and they actually do

Pretty well so this was one that I’m not going to lie this is the reason I brought this up as well is because what we’re starting to know is that and even just with some of my past experience just this week just doing a lot discovery on large language models

Especially gbt 4 is that these large language models are a lot more powerful than people think and the thing is that they’re only powerful as we continue to extract certain pieces of Knowledge from it by continuing to prompt it in certain ways to see what they can do and that’s

Why I do think that this research paper is super interesting because we never really figure out that gbt 4 could just unscramble this stuff I mean of course it’s great at predicting the next word but this is definitely some sci-fi stuff I’m not sure if I covered this one in

Last week’s video but I do want to show you guys an up update because this does show us the very interesting nature of chat GPT essentially this person says so a couple weeks ago I made a post about tipping chat gbt and someone replied H will this actually help performance and

I decided to test it and it actually works basically what they tested was if you know offering to give gbt for a tip would it perform longer responses and would it be more helpful and it did which doesn’t make sense because this is a large language model that shouldn’t

Have feelings that shouldn’t have incentives but somehow it’s picked up on that we don’t know if it’s just you know the internal engine thinking about stuff or if it’s just based on the language it’s realized that when people offer tips on across all of its training data that they do submit longer responses

Which is definitely strange because now we’re entering an era where there are a bunch of funny different posts where people are trying to incentivize gbt 4 with other things I’m going to show you what I mean by that so this person said I tried this and I’m serious that it

Only finished the program when I offered it a doggy treat I left the program half to finish for the basic prompt 35% tip and when then threatened it with non-existence for the 200% tip it got close but had one subf function still and then and then you can see right here

It said do it right and I’ll give you a nice doggy treat and then it does complete the function and this is something that is really really crazy because what is crazier was that apparently as well that people have started to discover and although I do think that the data here is not

Significant enough to Warrant a real research paper in terms of you know the kind of discussion that we do have but some research has you know essentially proven that chat GPT has actually become lazier in winter months due to the seasonality and that is really really

Weird but I mean it’s something that has come about and it’s something that is currently being discussed so like I said before these things the reason why I bring this stuff up is because it just largely shows why some of the largest you know AI figures in the space have

Said that a lot of the times with these AI systems we really don’t know what we’re building until it’s out in the wild and then a lot of people test it and then we get that strange feedback where they do these crazy things like imagine a large language model giving

Less responses during December time because people are lazier during that time it’s definitely a really really strange phenomenon but I guess this is more insight to how these large language models works and I guess this means now we’re understanding the behavior of these things and I guess you could say

If training data might actually impact this because of course I do think that there’s more studies needed on this but nevertheless it’s still pretty interesting then we had Microsoft release 5 2 the surprising power of lar language models well it’s actually not a large language model this is a small

Language model so essentially fi was one of the most interesting things that I’ve discovered on this channel um and one of the most interesting things that I’ve covered but it’s only 2.7 billion parameters and it does outperform a lot of the other large language models that are much better than it so essentially

They had you know the first one which is 1.3 billion parameters which was F1 which achieved state-of-the-art performance on python coding among existing slms and then they created fi 1.5 which was really good and then they’re now going to release f 2 which is absolutely insane and basically it’s

On power with these 13 billion parameter models and I remember ages ago when looking at the 51.5 report essentially all they did was they talked about how pretty much training these large language models they decided to use some really high quality textbooks and rather than just giving it data on just

Standard code they gave it like textbooks type answers and then that just drastically improved it and basically they were saying that you know really if you get really really high quality data ones where it’s you know just tons and tons of instructions you can just boost the performance of these

Large language models because previously the techniques that we were used were just you know mainly focused on not highest quality data but I guess just the more data like just because essentially what we wanted to do was just generalize with these models um and you know what we’re starting to realize

Okay and I’m going to show you later on in the video is that when you specialize in large language models you get really really Superior performance when everything’s really really done well in terms of having super high quality data you can Lally see right here that f 2 outperforms mistra it outperforms llama

2 uh and I think it does outperform Gemini Nano 2 as well so uh you can see that 5 2 is also really really smaller and it’s smaller than all of the other language models that I just compared it to which means that uh I’m not sure how

Scaling is going to work with this but if a 2.7 billion parameter large language model or small language model can do this guys like that’s just incredible in terms of you know efficiency in terms of size and in terms of what we know it’s going to be able to

Do because if you look at the size of these guys 7 billion 13 billion 70 billion uh 7 billion and MRA is actually really really good and this is you know not three times less but a decent size less that means I guess that what Microsoft really did prove here was that

High quality data is definitely all you need you can see right here that the related Publications are textbooks are all you need too so will it be the case that you know uh as we go on into the future we’re going to see more large language models be smaller and smaller

And smaller in size and also retain their efficiency which is you know what we expected because of course we do you know that in the future there’s going to be many more large language models offline and available on device so for this one I’m not going to gloss over

This one too much but if you haven’t seen this already this is essentially the Tesla bot it is Tesla bot Gen 2 and it’s absolutely amazing in terms of what we’ve seen one of the key features is the you know two depths of two degrees of freedom um and it’s absolutely insane

In terms of the walking I really love how this animation is this looks really nice and then of course we do have something that I do want to show you which is the speed of this so I I already did a video on this it’s got

Over 100,000 views so I’m sure that many of you did see this you can watch the full video I’m not going to spend too much time on this but this was very impressive from Tesla and this general purpose robot I can’t imagine where we

Are in 10 years I do know that a lot of times Tesla does have a lot of setbacks under a lot of delays but this is one of the hardest things they could do you know this is not like a car this is not like a cyber truck or something this is

A humanoid robot that is going to be inbuilt with AI and a bunch of sensors and cool stuff like that and of course I think with the rate that they’re developing it is really really impressive I only hope that with the Tesla B the development speed continues

Because I do know that Tesla has had some issues in the past I’ll be it for all the right reasons but I still do believe that you know with the robot race on um this is definitely good for us because what we’re likely to get as robots seted cheaper ones that are more

Effective and more likely to get them hitting the markets sooner because you know it’s capitalism everything is a Race So this kind of technology is definitely fascinating to see it be developed so quickly then what we had was something from way and I’m so glad that they’re actually working on this

Because this truly does mean that we’re now going to get that really really next step in terms of AI because essentially what they’re talking about is General World models now this is something that’s kind of debated quite a lot in the AI scene and it’s because what

General World models kind of does is it kind of leaks into the area where we start to discuss whether or not these large language models are human and whether or not they have a brain now that might be a little bit of an exaggeration but I can’t play the entire

Video for you for you because um I’d rather you just go there and watch it but I’m going to summarize it as best as I can essentially what they’re saying is that in order for the AI to generate really good videos on in terms of like

Making you know text a video they need to actually understand how the world works and if you don’t understand how the world works and if you’re not actually in the world you can’t realistically you know make a video you know we understand how the world works because we’re inside the world we

Understand that when you drop a ball how it moves we understand that when a ball goes around a corner the way the light looks we understand immediately what everything looks like because we have a world model we have our eyes we have our senses we know exactly how things should

Look but the AI doesn’t which is why when the AI tries to generate certain videos it just looks weird that’s why some of them look good some of them don’t look good and in this video they basically talk about how a dog has its own world model and how you know they’re

Working on improving the world models um for AI so that the AI can understand physics it can understand motion it can understand pretty much everything so that it can you know make better videos and it really does make sense and one of the reasons that people people say that

Chat GPT and gbt 4 is so good is because it does have its own world model um and I think it does because um it does have theory of mind and it does have some other things that it shouldn’t be able to do but it does and that’s only really

Possible if it does have its own world model about how the world is in terms of time and space which is why it’s able to answer you know physics questions and it’s able to look at certain images and understand exactly what would happen like there’s an image of like six or

Seven gears and people ask chat gbt or gbt 4 with vision you know if I press this one G what happens um and I’m pretty sure it does have a world model in its head so because if it doesn’t it would just like say you know I don’t

Know if it doesn’t have an understanding of physics and how things move together so um I think this is really cool because I think once they do crack this that’s when you know we get text a video on an insane level but for now it does

Seem like text to video isn’t going to be as good but there are companies working on that like pico which we’re going to talk about later on in the video but um yeah in this video it’s really cool it does help you understand that um and I would recommend you guys

Watch that um so yeah now another piece of news which actually did slip under the radar was the fact that Google is already training its next Big Gemini model Gemini 2 so it says according to one person filing with a mattera Google is already training its next big model

Now this is something that is pretty crazy because we didn’t expect Google to start doing this but it does show that Google is of course stepping up the AI game quite significantly so then what we had here I haven’t actually covered mid Journey on this channel for quite some time but mid

Journey is still something that’s still something that I use daily but essentially they have their new Alpha website out um and I can’t wait for M journey to drop their next update because it is going to be so crazy that I do think that it is going to be one of

Those things that literally everyone uses uh and essentially the way how they break down in terms of you know the subject uh you know the subject known artists in terms of the descriptions I think that is really cool because the problem with mid journey and I I don’t

Know okay why mid Journey has this is that it’s based in Discord and you have to put SL slash this and then you have to put dash dash like in order to get everything there’s not like simple buttons and stuff like that I guess the way they built their infrastructure was

A little bit messy but the the their product is so good that people are literally willing to go through Discord to use their product so I think once they have this website it’s going to open up them to you know a whole another user base cuz I know so many people that

Are techsavvy that honestly just struggle with mid Journey so once they do have a website like this which they’re rolling out with the alpha it’s going to be really cool to see uh you know what are the kind of things we do get cuz I know that they’re working on

3D I know they’re working on some other stuff which I will show you in a moment because mid Journey V6 is going to be out soon and I’m not sure if they’re dropping V6 with the website but the V6 samples I do want to show you some of

Their bad ones so essentially this person did a thread on the mid Journey V6 images and it says mid Journey had their V6 rating party they said that these are the bad images next rating party will be the good one so these are the example of images that they rated as

Bad and I think it’s interesting to see what these look like because I think they show which kind of Direction M journey is going in I think they might be going in the direction of hyper realism which is where images look so real that they don’t even look you know

AI generated like for example this image right here it has that realness that I just can’t explain but some of you guys will know exactly what I’m talking about where the image doesn’t look too polished it has the imperfections that make it look really real if that makes

Sense because some of M jour’s images the skin just looks so smooth like the lighting is just so perfect like everything looks like it was you know done on like a film shoot or something so um I do think that if they do go this route on V6 it would be insane because

Images like this unless you’re reading the text you’re not going to be able to tell like at all that’s from mid Journey like you know I know mid Journey Falls people now but V6 could be insane so it will be interesting to see that and of course as always these things will be

Linked in the description then of course speaking of text to video we did actually have pcabs show us this video from New Media Pioneer and essentially this video shows us the capability in terms of text to video now this is an underwater scene and I do want to say

That with the way how current AI systems work I do think that text to video always does work well with underwater scenes because it always has the same kind of strange like Flappy little motion well not really Flappy but the same like kind of smooth motion that uh

Underwater scenes do have so that is one thing that you know pabs does get right but it also does show us the uh coherence of this cuz a lot of you know uh text to video models aren’t really that good but pabs I don’t know what

They have I don’t know what their secret source is but they do have some secret Source the people that were there I think they even went to Stanford like one of them dropped out from Stanford to launch pabs um and then they did recently raise like I think $38 million

So the company does seem to be going well and it is finally good for Mid journey to finally have some competition in this area because it means that you know with competition comes you know better models or better results and better software so I can’t wait to see

What else Runway does and I’m going to show you some more p lab examples in a moment so once again with pick a 1.0 this person actually did a side by-side comparison at the end and basically what they did is they used modify region to essentially take the head of Forest Gump

And then essentially put a 3D animated raccoon on it now I know you guys might be thinking this could have been done with filters yada yada yada but it isn’t filters it’s text to video and this is modify region so I think something like this is really really cool and and it

Showcases how crazy this software is going to be remember guys this is p 1.0 do you remember GPT 1.0 what that looked like what are we going to see you know in like the next four to five years in terms of this technology as long as we

Don’t hit some insane brick wall in terms of you know ceiling then I think we could really be on to something here so uh this PAB stuff I I don’t know I find it really cool uh and like I said I’m glad that Runway had some competition but Runway also did show us

Something that was really cool as well and you can see here that this is generate emotion with Gen 2 and essentially they talk about you know for example we want to grab this clip and then um we want to do the motion brush because it’s a new feature that they

Added and entally say the man is smiling and you can see that the man goes from you know standard to a man that is smiling then of course they can change it to the man being frustrated angry for a brow and then you can see that the

Man’s emotion does change and I feel like that is really good in terms of you know being able to control the consistency of your image and you know it gives you a lot more Direction so although you know there still is that little bit weirdness I think that you

Know having this allows you to be a little bit more creative with the software um and allows you to actually direct exactly what’s going on cuz a lot of times we don’t have the direction to you know direct exactly what’s going on we can only like hope our outputs are

Really good um and this gives us the ability to do that this one here says weak cyers SEC and ease of adversarial attacks remain important reasons for the somewhat slowish Corp adoption of LM at scale through prompt injection an adversary can not only extract the customized system prompts but access to

Uploaded files basically what they’re saying here is absolutely insane they’re basically saying that what they could do uh you know through certain methods of prompting is that they could literally access the files and access anything you uploaded to a custom GPT and access the custom GPT system prompt previously I

Did this like when custom gpts were released I did this cuz I talked about look this is pretty crazy I don’t even know how you know this is a thing but um they’re just saying that you know we can still prompt it even after opening eyes

Updates that we can still get the stuff so basically right now if you’re making custom gpts and you’re using them please don’t put any personal information in the custom prompts or upload any custom personal files because they can actually get access to those files with a variety of prompting techniques and trust me

When I say you’re not even going to think about the prompting techniques that they use so it’s better off right now if you are using custom gpts just keep them private and if you do make one public just make sure it isn’t any of your private data or anything that you

Wouldn’t want anyone to have just in case someone is able to figure out what that custom prompt is and if they’re able to get access to those files um and some people are saying that this might never be solved but I’m pretty sure there’s going to be a way that it’s

Going to be solved once open AI does update the stuff but um yeah it’s definitely quite an issue not one that I did expect but it’s definitely something that we should be thinking about so then we had something that was really cool okay so this is Walt the fusion model

For photo realistic video generation and our model is a Transformer trained on image and video generation shared at a latent space long story short these videos are really consistent and they’re photo realistic the only thing I think is that this technique just needs a bit more refining so that it’s more high quality

But the preliminary results are outstanding I mean look at this it says our model can be used to generate videos with consistent 3D camera motion like look at this bunny rabbit right like in the bottom right you can see the bunny rabbit is completely 3D and that is

Literally text to image or I think it’s image to you know an image being animated so this definitely has some really really cool implications if we can get this scaled up um and of course they do have a research paper I don’t know why I just looked at that ad there

That was I thought that was part of it for a second but um yeah it’s really really interesting because like I said techniques that are being developed um I mean wow like this is uh I mean of course you know you can’t use this at the moment because it’s just you know in

Terms of like the quality it definitely does have some issues but in terms of what it’s able to achieve like rotating an object that is something that AI really does struggle with like being able to do that so now that they’ve able to you know conquer that is that going

To be applied to other systems I really do hope so then essentially we had seamless mt4 by meta and this is really cool because essentially it removes language barriers through expressive fast high quality AI translation basically the AI translation that they have is so quick that it just seems like

It’s real time and some people are speculating whether or not they’re going to add this to the rayb bands that they have I think that they will cuz I think that it would be really effective if they did and I wouldn’t be surprised if they did add this to the rayb bands

Because as we know meta is actually stepping up the AI game because they wasted billions of dollars on the metaverse um and they’re just going to put billions of dollars somewhere else so um I’m pretty sure that meta is going to keep on improving like they have been

With their you know open source large language models um and yeah it’s it’s really cool I will play a section from this trailer so you guys can see what this is but um I really would love to see this embedded into like some kind of device like the rayb bands cuz that

Would make it so effective like you know just having your glasses to be able to translate what someone is saying to you or to be able to translate you know what you’re saying to someone else in another language that would be so so useful for people that travel um or when you’re you

Know abroad or with someone that doesn’t speak the native language that would be really really effective definitely would allow a lot more connections so this was the biggest discovery that I think is going to shake up the entire AI Community because uh I don’t think people have really grasped what this

Means it says introducing fun search in nature a method using large language models to search for new Solutions in mathematics and computer science it pairs the creativity of an llm with an automated evaluator to guard against hallucinations and this llm was able to you know make new discoveries in

Mathematics so it was able to you know solve a problem slash make new solutions for this problem and the problem was that many people said that this wasn’t going to ever be possible because AI doesn’t generalize outside of its training data and it doesn’t generate new ideas that we can actually use so

Essentially it’s like this okay cuz this article describes it in a little less technical detail it says Google’s Deep Mind use a large language model to solve an unsolvable math problems and they had to throw away most of what It produced but there was a gold mine among garbage

Which is uh pretty crazy and it’s basically like you know when you hallucinate and not eventually create new knowledge and I think people you know like they were getting what AI is wrong because they were like ah the AI hallucinates it it doesn’t know what it’s talking about how many times you

Know with science experiments do scientists you know for A to B to get C or whatever and then often times what we end up finding is that you know in the randomness in the chaos we do find um you know the greatness that we do need

To move to the next level like for example I’m not sure if it was Penicillin that was growing on mold or something or there was you know some kind of science experiment but I do remember that there was certain s science experiments that weren’t essentially supposed to happen they were

Essentially random but they essentially produced a lot of the modern science that we do know today so it’s always those crazy things that you can’t you know I guess you could say throw away and that’s why with larger language models with AI systems they’re able to generate Millions like you know over

Time um and with that in all of those things there there could be some gold so essentially uh you know AI is making discoveries is this just the beginning is was it just a one off is just what we’re going to see I do think like I

Said before we’ve seen this you know with Alpha go we’ve seen this with Alpha zero um I do think that this is just the beginning because once we do get a refined version of this in many different fields think about the amount of discoveries that we could have okay

Like imagine we had an entire field of researchers like AI researchers in terms of science mathematics um and then we had AI validators that were able to validate the ideas that came out to see if they were good or not um like we could have research just like moving at

Light speed what’s even crazier that they said in the beginning of this project we didn’t know whether this would work at all so I definitely think like I said it’s crazy to see what AI can do and I do think that this is just a start and once this is all automated

Boy oh boy then open AI actually released something that you really need to look at because this essentially was prompt engineering and they released a new guide on how to actually prompt engineer for better results you can see it talks about uh certain tactics you

Can see uh use the limit uh just just a whole bunch of stuff this guide is really really long but it is really really worth it because I’ve recently been been improving my prompt engineering and boy oh boy the responses that I’ve gotten from GPT 4 are absolutely incredible so I would say

Always always always if you’re not getting what you want out from gp4 speak to someone that works with prompt engineering because I definitely feel like you know prompt engineering is just a start these large language models are so cool um and I think the problem is is

That it’s just you just like the way how gpts are set up is the fact that we just look at it like as a chatbook I think if we looked at it like like an Oracle in terms of just something that knows like a lot of knowledge we would get much

Better responses because um when I’ve just asked questions in different Frameworks and different ways the responses I’ve gotten are just uh absolutely incredible so I would say that these large language models are only good as a prompter and with this you’re definitely going to get way more

Output than you would than you’ve been using it uh standardly so uh definitely check this out because it’s really really useful um and shows a lot of use cases um and what I might work on is I might work on like a prompt engineering PDF which is of course going to be free

But I think it’s definitely useful because uh there are just so many prts out there that are really good and I hope you get so much out of the system that otherwise uh just are going to leave you in the dark then of course we had real time fake replacement AKA deep

Fak and this runs at plus 30 frames per second you see the guy on the left is that guy and then this is Elon Musk so um although yes like if I zoom this up and you guys see this you can see of course there’s many imperfections but on

A low latency camera if you’re trying to fool someone if you’re trying to scam someone and combine that with a voice changer you’re going to be able to do that very very effectively so uh this is what I’m saying this is unfortunate but it’s also good because technology

Improvement is good but um I I don’t I don’t know any scenarios where this could be good but I’m just you know trying to warn you guys that yes you do need code words for you and your family just in case someone’s pretending to be a family member maybe they’ve taken all

Of the images of a family member you know trained them on this data set and then used it but I would say that uh this website is really cool but at the same time you need to make sure that you are aware that this is possible this

Technology is possible and that this is something that you do need to watch out for then we actually had and this is the last thing cuz I know this video is probably long at this point um we had Google release image in 2 which is the

Most state to Advan image to well text to image technology from Google and hopefully this is in Bard because it would make a lot of sense and I think that this is really really good because like I said this just looks realistically weird like just a strange

Realism weird as in like it doesn’t look AI generated at all uh and a lot of these images really do have that same Vibe so I think like like I said this image right here it just has that realism Vibe I don’t know I don’t know

How to describe it but images like this have the mid Journey Vibe if you guys you guys know what I mean you guys know what I mean okay so uh yeah the only problem with is that um it’s only available through vertex AI um and I know that isn’t just available to

Everyone so the only problem I have with Google at the moment is just if they would just release this stuff to everyone that would be absolutely amazing cuz they do have amazing products they do have amazing stuff and even when they release Gemini Gemini Ultra is still not available for people

Yet so with that being said if you did enjoy the video see you in the next one

Video “Major AI News #23 – Google Gemini -2 , Major ChatGPT Breaches, Google Text To Image And More” was uploaded on 12/18/2023 to Youtube Channel TheAIGRID