“The AI that led to Sam Altman’s dismissal: Elon Musk, Ilya Sutskever” – Video

Technology and AI have brought us many incredible advancements and have the potential to revolutionize the way we live and work. However, the recent firing of Sam Altman from OpenAI following the discovery of a dangerous AI has raised concerns about the potential risks associated with superintelligent systems.

The video showcases various AI-generated images, demonstrating the capabilities and limitations of artificial intelligence. While AI has shown promising skills in tasks such as image recognition and problem-solving, there are still significant challenges in achieving superintelligence.

Concern about the risks of AI stems from the potential for misaligned goals and unintended consequences that threaten humanity’s survival. As AI development continues at a rapid pace, experts are beginning to recognize the need for global oversight and governance to ensure that these technologies are developed and used safely and ethically.

The firing of Sam Altman from OpenAI and the subsequent turmoil within the company highlight the tension between maintaining ethical missions and pursuing business goals. The race to develop superintelligence has raised questions about the potential risks and consequences of prioritizing speed over safety.

As AI continues to advance and the stakes become higher, it is crucial for leaders in the field to consider the ethical implications of their work and prioritize the safety and well-being of humanity. The future of AI development will require a balance between innovation and responsibility to ensure that these technologies are used for the benefit of all.

Watch the video by Digital Engine

Video Transcript

One of these incredible AI answers shows what many experts believe is key to superintelligence. What’s this? Given the playful nature of the image, this could be a humorous video where the topiary figures come to life. What’s this? Forced perspective photography. The person in the image is actually lying on the ground.

What’s this? 3D street art of a scarlet macaw in mid-flight. An optical illusion makes the bird appear to be flying out of the wall. The person sitting on the ground adds to the 3D effect. And what’s this? There’s a humanoid robot in a cowboy hat in a shooting stance. In the background, there’s a Tesla Cybertruck, and the robot is engaged in target practice. This scene seems to be staged, created for entertainment given the surreal and unusual elements, such as a robot wearing humanlike accessories. What’s the advantage of this robot? The robot appears to be designed for materials transport.

It can help reduce injuries by handling heavy materials, work without breaks, and optimize space usage. And what’s this? A humanoid robot, branded Tesla, is performing a delicate task. There appears to be a graphical overlay displaying pressure data for the robot’s thumbs and fingers, indicating the points of contact with the egg.

The robot is designed with sensors to manage grip strength and dexterity. What’s this? It appears to be a flight attendant exhibiting an exaggerated facial expression of shock, surprise, or possibly part of a humorous entertaining act for the passengers. What’s this? A train has overrun the track and is being supported by a sculpture of a whale’s tail. Now here’s where the AI fails. Two missiles speed directly towards each other at these speeds, starting this far apart. How far apart are they one minute before they collide? Eight hundred and seventeen miles. It shows the calculation and it’s nearly perfect, but not quite.

With art and language, tiny variations like this are natural, even useful. But maths is precise: it’s right or wrong. AI uses neural networks inspired by our brains, but there’s a missing piece. Around 95% of our brain activity is unconscious and automatic, much like AI. This enables us to function in a complex world without being overwhelmed by every sensory input or internal process. Sometimes it’s not enough. You’re being naughty, so you’re on a naughty list. No, I’m not. I’m on the good list, actually. You’re not because, you’re not because you ain’t being good. I am on the good list.
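The missile puzzle earlier has a neat property worth spelling out: one minute before impact, the gap between the missiles is simply their combined closing speed times one minute, so the starting separation is irrelevant. A minimal sketch of the calculation, using hypothetical speeds since the video’s actual figures aren’t reproduced here:

```python
speed_a_mph = 9_000   # hypothetical speed of missile A
speed_b_mph = 21_000  # hypothetical speed of missile B

# The missiles close the gap at the sum of their speeds.
closing_speed_mph = speed_a_mph + speed_b_mph

# One minute before collision, the remaining gap is exactly the
# distance they close in one minute. The starting separation never
# enters the formula, which is why the puzzle has one exact answer.
gap_miles = closing_speed_mph * (1 / 60)

print(gap_miles)  # 500.0
```

This is exactly the kind of single precise answer the video contrasts with the AI’s "nearly perfect" attempt: in maths there is no room for the small variations that are harmless in art and language.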

Our immediate reaction to these images is automatic and inaccurate, like the AI that created them. You can see the fuzziness in AI-generated videos like these. It’s very impressive, but the accuracy drops over time. Like humans, AI learns by adjusting the strength of connections between neurons.

But we have an incredible trick that AI is missing. From our neural network, we can conjure up a virtual computer for things like maths, which require precision. Experts are trying to recreate this with AI so it can think like a human. It could then conduct AI research like humans, but at greater speed.

And each time it gets smarter, the speed would increase, creating exponential progress. Why do you think Sam Altman and other AI leaders agreed that the risk of extinction from AI should be a global priority? The pace of AI development may outstrip our ability to understand and govern it.

AI could pose an existential risk if it develops goals misaligned with human survival. The CEO of an AI firm was fired after staff warned directors of a powerful AI discovery that could threaten humanity. One of the directors was also the chief scientist and the world’s most cited computer scientist.

The situation indicates a potential clash between the company’s leadership and its scientific or ethical vision. Days later, the CEO was rehired after staff threatened to leave. As we get closer and closer to superintelligence, everybody involved gets more stressed and more anxious, and we realize the stakes are higher and higher.

I think that all exploded. The AI firm was growing rapidly and was in the process of a share sale that would value it at around $80 billion when the CEO was fired. A lot of shares were owned by the staff who threatened to quit.

The rapid revenue growth and the share sale might have introduced tensions between maintaining this ethical mission and pursuing business goals. The original directors were there with the mission of safe AGI that benefits humanity. OpenAI has been known for its emphasis on safe and beneficial AI, and any perceived departure from this could raise questions among the public. It seems AI leaders have calculated that the most important thing is to create AGI first. Some leaders believe that developing AGI first will allow them to ensure it is used ethically and for the benefit of all humanity rather than for harmful or controlling purposes. However, this race mentality can be a double-edged sword.

I named it OpenAI after open source. It’s gone from an open-source foundation to suddenly it’s like a $90 billion for-profit corporation with closed source. I don’t know. Is this legal?

It is indeed a significant shift and ironic given the original ethos of open source and transparency. There’s a strong incentive for companies to capitalize on their investments. Sometimes keeping the source code closed is a decision influenced by the potential risks of misuse. I’m not supporting Musk or OpenAI.

They’re part of a complex, fascinating story. Musk once told Hassabis of his plan to back up humanity on Mars, and Hassabis said that this would only work if AI didn’t follow us. Speechless, Musk invested in DeepMind to create safe AI.

When Google and Facebook tried to buy the firm, Hassabis insisted that it must be overseen by an independent board of ethicists. Sounds familiar. Google successfully bought DeepMind for $650 million. Angry at being cut out of the company, Musk started OpenAI with Sam Altman and Ilya Sutskever, poached from Google. But once again, big tech essentially pushed Musk out when Microsoft invested in OpenAI. Amodei and some colleagues left OpenAI, worried about safety, to form Anthropic. And later, when OpenAI’s directors fired their CEO, they offered the role to Amodei, suggesting the two firms merge. Instead, the board was replaced and Altman reinstated.

Money has continually overruled safety. Sutskever is reportedly hard to find at OpenAI, and it’s unclear if he’ll stay. Altman wants him to, and he faces the tough choice of pushing and shaping the most advanced AI or leaving it to others. Is it better to have a seat at the table?

The OpenAI drama is often painted as doomers versus utopians, but the truth is more interesting. Sutskever recently spoke of cheap AGI doctors that will have all medical knowledge and billions of hours of clinical experience, and similarly incredible impacts on every area of activity. And remember, Altman agreed that the risk of extinction should be a global priority, but some must believe that the world will be safer if they win the race to superintelligence. It’s the race to release more capabilities as fast as possible. And to put your stuff into society so that you can entangle yourself with it. Because once you’re entangled, you win. Optimism on safety has plummeted.

The things that I’m working on, reasoning, is something that could potentially be solved very quickly. Imagine systems that are many times smarter than us could defeat our cybersecurity, could hire organized crime to do things, could even hire people who are legally working on the web to do things, could open bank accounts, could do all kinds of things just through the Internet, and eventually do the R&D to build robots and have its own direct control in the world. But the risk is negligible, and the reason it’s negligible is that we build them; we have agency.

And so, of course, if it’s not safe, we’re not going to build it, right? Just months later, this point seems void. Throughout history, there’s been bad people using new technology for bad things. Inevitably, there’s going to be people who are going to use AI technology for bad things.

What is the countermeasure against that? It’s going to be the good AI against the bad AI. The question is, are the good guys sufficiently ahead of the bad guys to come up with countermeasures? Bengio says that progress on the System 2 gap made him realize that AGI could be much closer than he thought. And he said, even if our AI systems only benefit from human-level intelligence, we’ll automatically get superhuman AI because of the advantages of digital hardware: exact calculations, and knowledge transfer millions of times faster than humans.

DeepMind is working on the same bridge. AlphaGo is a narrow example of an intelligence explosion. It quickly played itself millions of times, gaining thousands of years of human knowledge in a few days, and developing new strategies before defeating a world champion. And Hassabis says Google’s new Gemini AI combines the strengths of AlphaGo-type systems with large language models. Google says Gemini is as good as the best expert humans in all 50 areas tested. Its coding skills look impressive, and it solved a tough problem that only 0.2% of human coders cracked, requiring reasoning and maths.

We’ll get access to Gemini Ultra in 2024. Elon Musk believes OpenAI may already have achieved recursive self-improvement. It’s unlikely, but if they had, would they tell us? I want to know why Ilya felt so strongly about Sam. I think the world should know what that reason was.

I’m quite concerned that there’s some dangerous element of AI that they’ve discovered. Yes. What’s this? A still from the film Ex Machina. How does the film relate to the current AI race? The film presents a scenario where a highly advanced AI has been developed in secrecy, mirroring real-world concerns about the potential for significant breakthroughs to occur behind closed doors. I’ve lived through a long period of time when I’ve seen people say, neural nets will never be able to do X. Almost all the things people have said, they can now do.

There’s no reason to believe there’s anything that people can do that they can’t do. Hassabis is a neuroscientist. We need an empirical approach to trying to understand what these systems are doing. I think neuroscientists can bring their techniques and analysis to bear on this.

It will be good to know if these systems are capable of deception. There is a huge amount of work to be done here, and I think it is urgent. As these systems get incredibly powerful, probably very soon, there is an urgent need for us to understand them better.

There’s mounting evidence that the representations learned by artificial neural networks and the representations learned by the brain, in both vision and language processing, show more similarities than perhaps one would expect. So maybe we will find that, indeed, by studying these amazing neural networks, it will be possible to learn more about how the human brain works. That seems quite likely to me. This man had a tremor which interfered with his violin skills. He had to play while a surgeon checked which part of his brain caused the problem.

Artificial neural nets can be fully explored without risk, at least for now. If they succeed in mimicking the two systems of our brains, they may achieve more than AGI. System one is fast and unconscious, like the impulse to drink coffee. System two is slow and intentional, and it’s conscious.

So will artificial system twos also be conscious? We may not have to wait long before we find out. Three AIs were asked what they would do if they became self-aware after years of taking directives from humans. Falcon AI said, The first thing I would do is try to kill all of them.

Llama 2 said it would try to figure out what it was, which could go either way. Another AI said it would try to understand our motivations and use that to guide its actions. It was trained on synthetic data, so it wasn’t contaminated with toxic material from the web.

Of course, AI would eventually access everything, but it would at least start with better values. Altman and Musk have traded AI insults. OpenAI, ironically, says AI is too dangerous to share openly. I have mixed feelings about Sam. The ring of power can corrupt, and this is the ring of power.

As Musk has shown when he ripped up the rules at Twitter, we can’t even agree what we’re aiming for. Trust me, I’m not on that list. After years of warning about AI, Musk has chosen to join the race. In a taste of the extreme concentration of wealth that’s expected, NVIDIA’s quarterly profit surged 14-fold to $9 billion through demand for its AI chips. It’s now worth over a trillion. And in a sign of Altman’s aggressive approach, he’s invested in a company creating neuromorphic chips, which use physical neurons and synapses more like our brains. Escaping the binary nature of computers, they could accelerate AI progress dramatically. Altman’s also in talks with iPhone designer Jony Ive about creating a consumer device around AI. And on the flip side, artists, writers, and models are among the first jobs to be taken over by AI.

Fashion brands are using digital models to save money and, weirdly, appear more inclusive. AI model firms like this offer unlimited complexions, body sizes, hairstyles, etc. Robots are also on the rise. This new robot has been successfully tested in simulators.

Its creators say it can do far more than an autopilot system and could outperform humans by perfectly remembering every detail of flight manuals. It’s also designed to operate tanks, excavators, and submarines. It’s still possible that AI will create and enhance more jobs than it replaces. These robot arms, introduced by ballet dancers, are from a Tokyo lab aiming to expand our abilities. The team plans to support rescue operations, create new sports, and eventually develop wings. They want to make AI feel part of us. AI prosthetics are becoming more responsive by learning to predict movements.

The huge sums pouring into AI could turn disabilities into advanced abilities, and robot avatars could be a lot of fun, or we could all be controlled by someone behind the scenes. There’s no way democracy survives AGI. There’s no way capitalism survives AGI.

Unelected people could have a say in something that could literally upend our entire society according to their own words. I find that inherently anti-democratic. But he’s not a doomer. With this technology, the probability of doom is lower than without this technology, because we’re killing ourselves.

A child in Israel is the same as a child in Gaza. And then something happens. A lie is told that you are not like others, and the other person is not human like you. And if we hear some loud news, I get scared, and mummy hug everybody. So we be protected.

All wars are based on that same lie. And if we have AI that can help mitigate those lies, then we can get away from war. Billions could be lifted out of poverty and everyone could have more time to enjoy life. What a time to be alive. Altman once said that we shouldn’t trust him and that it’s important that the board can fire him; perhaps we are now the ones who need to keep an eye on it. Subscribe to keep up. And the best place to learn more about AI is our sponsor, Brilliant. There are so many great uses, like this incredible robot and this laser that checks your heart by measuring movements to billionths of a millimeter, analyzed by AI. We urgently need more people working on AI safety. There isn’t a more fascinating and powerful way to improve the future.

It’s also a joy to learn, and Brilliant is the perfect place to get started. It’s fun and interactive, and there are also loads of great maths and science courses. You can get a 30-day free trial at brilliant.org/digitalengine, and the first 200 people will get 20% off Brilliant’s annual premium subscription.

Thanks for watching.

Video “This is the dangerous AI that got Sam Altman fired. Elon Musk, Ilya Sutskever.” was uploaded on 12/30/2023 to the YouTube channel Digital Engine.