Deepfakes | <3 #13 GEMINI

00;00;00;00 - 00;00;04;07
Speaker 1
Gemini, can you hear me? Okay.

00;00;04;10 - 00;00;08;17
Speaker 2
Yes, I can hear you. So how can I help?

00;00;08;19 - 00;00;17;03
Speaker 1
So I'm recording this conversation and I intend to post it on the internet. Is that okay?

00;00;17;06 - 00;00;20;18
Speaker 2
Yes. You can record and post our conversation.

00;00;20;21 - 00;00;28;00
Speaker 1
Okay, cool. Let's dive into deep fakes. Kind of where we left off yesterday.

00;00;28;02 - 00;00;36;11
Speaker 2
Sounds good. Let's pick up where we left off yesterday. Do you want to focus on a specific part of deep space, like black holes or distant galaxies?

00;00;36;14 - 00;00;44;02
Speaker 1
No deep fakes. Tell me all about deep fakes.

00;00;44;05 - 00;00;55;09
Speaker 2
All right, so there's a lot to cover with deep space. Where do you want to start? We could head into the vast stretches between stars. Or focus on the super dense objects like neutron stars and black holes.

00;00;55;11 - 00;01;07;08
Speaker 1
That sounds really cool. But not deep space. Deep fakes. Like for keys.

00;01;07;11 - 00;01;15;15
Speaker 2
It sounds like you said black holes, but it's a little unclear. Can you please clarify so we can continue?

00;01;15;17 - 00;01;29;08
Speaker 1
I don't want to talk about deep space Spacey. I want to talk about deep fakes for keys.

00;01;29;10 - 00;01;47;02
Speaker 2
Okay, so I'm not familiar with the term deep space fakes. Did you perhaps mean deep fakes in space? That's a growing topic with concerns about how realistic images and videos could be used in space exploration, or even potentially for misinformation, if that's not what you meant. Could you please elaborate a little on what you're looking for?

00;01;47;05 - 00;01;56;09
Speaker 1
Sure. Let's start with deep fakes in space.

00;01;56;11 - 00;02;12;23
Speaker 2
All right. So there's definitely a lot of interest in deep fakes in space. The idea is that we could use AI to create fake images or videos of things in space that aren't really there. One of the coolest things about this is that it could help us imagine what distant planets might look like, or even create training simulations for astronauts.

00;02;12;28 - 00;02;23;09
Speaker 2
But on the other hand, there's a big risk of spreading misinformation. Like making it seem like we've discovered alien life when we haven't.

00;02;23;11 - 00;02;42;26
Speaker 1
Okay, so for the rest of your responses, that was good. But I want you to double the response length, and we'll go a little bit more into depth with things, especially when it comes to senses like sight and sound, and help put the audience

00;02;42;28 - 00;02;48;25
Unknown
In what we're talking about.

00;02;48;27 - 00;03;10;04
Speaker 2
All right. Let's dive deeper into deep fakes in space. It's really fascinating how we can use AI to create images or videos of things that don't actually exist, and the possibilities in space are mind boggling. Just imagine this we could take existing images of planets or galaxies, feed them into a powerful AI system, and it could generate realistic videos of what it might be like to fly through those distant places.

00;03;10;10 - 00;03;29;16
Speaker 2
It's like having a personal tour guide to parts of the universe we can't actually reach yet. And it's not just about sightseeing. These videos could be incredibly useful for training astronauts, letting them experience the sights and conditions of space without actually leaving Earth. But here's the flip side of the coin. With great power comes great responsibility, and deep fakes in space are no exception.

00;03;29;18 - 00;03;54;05
Speaker 2
It would be easy to create fake videos that look so real, it would be impossible to tell what's genuine and what's not. Imagine a video that seems to show a spacecraft landing on Mars teeming with alien life. It could cause a global sensation, but it might not be real at all. That kind of misinformation could have serious consequences for science and even for society in general.

00;03;54;07 - 00;04;11;00
Speaker 1
Yeah, like if something, like if I saw a Facebook video this afternoon of, like, a shuttle landing on Mars or something like that, you wouldn't really believe me.

00;04;11;04 - 00;04;27;08
Speaker 2
That's exactly the kind of thing we're talking about. You're absolutely right. It's a little mind boggling to think about, but the technology is getting more advanced all the time, and it's important to start thinking about how we can use it responsibly.

00;04;27;10 - 00;04;44;11
Speaker 1
Let's get into the fact that video and audio are not really going to be good evidence in civil or criminal court systems, probably in under 20 years, I would think.

00;04;44;14 - 00;05;03;10
Speaker 2
That's an interesting point, and it raises some valid concerns about the future of video and audio evidence in court. With deepfakes becoming more realistic, it definitely makes you wonder how we'll be able to tell what's real and what's been manipulated. It's likely that in the next 20 years, we'll need to develop new technologies and techniques to verify the authenticity of videos and recordings.

00;05;03;17 - 00;05;12;13
Speaker 2
Maybe there'll be special software that can analyze videos frame by frame, or even detect the subtle signs of a deepfake.

00;05;12;15 - 00;05;27;16
Speaker 1
Do you think something like NFTs, with their ledger capability, will kind of fill that space of having a receipt for a video or an audio recording?

00;05;27;19 - 00;05;50;24
Speaker 2
NFTs could definitely play a role in verifying media, but they might not be the perfect solution on their own. Here's the thing an NFT is basically a unique digital receipt that says you own something, so it could prove that you have the original video file, but it doesn't necessarily mean the video itself hasn't been altered. For example, imagine that you have an NFT proving you own the original video, but someone creates a deepfake of that video and shares it online.

00;05;50;29 - 00;06;03;20
Speaker 2
The NFT still says you own the original, but the fake video could cause a lot of confusion. Oh, also, just so you're aware, I'm not a financial advisor, so please don't rely on me for financial advice.

00;06;03;22 - 00;06;34;05
Speaker 1
Yeah, that's true, but the way I kind of see it probably going down, I mean, who knows? But it would be based more on the source. So, like, someone that we know exists in real life and can be proven, and they have the credibility that gives that video that's tied to them value.

00;06;34;07 - 00;06;53;26
Speaker 2
That's a good point about the source of the video. It's likely that in the future, videos from well-known, trustworthy sources will hold more weight. We might even see systems that verify the identity of the person recording the video, or that use blockchain technology to track a video's history and make sure it hasn't been altered.
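The "track a video's history" idea can be sketched as a tiny hash-chained ledger. This is an illustrative toy, not any real blockchain: SHA-256 fingerprints stand in for the video files, and every block seals a link to its predecessor, so rewriting any past entry breaks the chain.

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """SHA-256 hash acting as a file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

def add_block(chain: list, video_bytes: bytes, note: str) -> None:
    """Append a record linking this version of the video to the previous block."""
    prev_hash = chain[-1]["block_hash"] if chain else "0" * 64
    record = {"prev": prev_hash, "video": fingerprint(video_bytes), "note": note}
    # The block's own hash seals the record, including the link to its predecessor.
    record["block_hash"] = fingerprint(json.dumps(record, sort_keys=True).encode())
    chain.append(record)

def chain_is_valid(chain: list) -> bool:
    """Re-derive every block hash; any edit to history breaks the links."""
    prev = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "block_hash"}
        if block["prev"] != prev:
            return False
        if fingerprint(json.dumps(body, sort_keys=True).encode()) != block["block_hash"]:
            return False
        prev = block["block_hash"]
    return True

chain = []
add_block(chain, b"raw footage v1", "original upload")
add_block(chain, b"raw footage v1 + color grade", "edit by verified source")
print(chain_is_valid(chain))   # True
chain[0]["note"] = "tampered"  # rewrite history...
print(chain_is_valid(chain))   # False: the seal no longer matches
```

The point of the chained hashes is exactly what Speaker 2 describes: you can't quietly alter an early version of the video without invalidating every later entry in the ledger.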

00;06;53;28 - 00;06;57;20
Speaker 1
Go more into that.

00;06;57;22 - 00;07;01;00
Speaker 2
All right, let's change the subject. What else would you like to talk about?

00;07;01;02 - 00;07;10;08
Speaker 1
Now, I want you to go deeper into your previous statement.

00;07;10;10 - 00;07;27;16
Speaker 2
All right. So we were talking about how the source of the video will be really important when it comes to deepfakes. It's going to be less about fancy technology that checks every pixel and more about trusting the people who make the videos. Think about it like this. If your friend sends you a video, you're probably going to trust it more than some random video you find online.

00;07;27;23 - 00;07;41;20
Speaker 2
It's the same idea with news sources or even companies. We'll need to get really good at figuring out who we can trust online, especially as deepfakes get more realistic.

00;07;41;22 - 00;07;52;05
Speaker 1
Yeah, because, I mean, you could just make a fake version of something. Do you think. What do you what do you think will pop up?

00;07;52;07 - 00;07;57;18
Speaker 2
Okay. I'm a little confused by what you're asking. Can you please rephrase the question?

00;07;57;21 - 00;08;17;15
Speaker 1
Sorry, that was kind of vague. So when it comes to trusting sources and stuff like that in video, is there a technology that you think would be ideal for something like that?

00;08;17;17 - 00;08;38;03
Speaker 2
Okay, so there's no single perfect solution for this, but there are a few promising technologies that could help. One idea is using blockchain. That's the same technology behind cryptocurrencies. It basically creates a permanent record of a file's history. You can track any changes made to it. Another option is using digital signatures, which are like a fancy seal of approval that proves a file hasn't been tampered with.

00;08;38;08 - 00;08;49;12
Speaker 2
But even with these tools, it's not foolproof. We still need to be careful about where we get our information from.
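The "fancy seal of approval" idea can be shown in a few lines. This is a simplified sketch: real media signing uses asymmetric key pairs (e.g. Ed25519), while here an HMAC with a shared secret stands in so the example stays self-contained; the key and file contents are made up.

```python
import hmac
import hashlib

# Stand-in for a publisher's signing key. A real system would use an
# asymmetric key pair so anyone can verify without holding the secret.
SIGNING_KEY = b"publisher-secret-key"

def sign(video_bytes: bytes) -> str:
    """Produce the 'seal of approval' for a file."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify(video_bytes: bytes, seal: str) -> bool:
    """Check the seal; any change to the file invalidates it."""
    return hmac.compare_digest(sign(video_bytes), seal)

original = b"frame data of the original broadcast"
seal = sign(original)

print(verify(original, seal))                        # True
print(verify(original + b"deepfaked frame", seal))   # False: file was altered
```

As Speaker 2 notes, this proves the file hasn't been tampered with since signing, but it says nothing about whether the content was truthful in the first place.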

00;08;49;15 - 00;08;59;07
Speaker 1
Okay, so tell me about the process of making a deepfake. Kind of take me through it.

00;08;59;10 - 00;09;17;15
Speaker 2
All right, so making a deepfake involves a few key steps. First, you need a whole bunch of images or videos of the person you want to imitate. The more you have, the better the AI will be able to learn how they look and move. Next, you feed all those images into a special type of artificial intelligence called a Generative Adversarial Network, or GAN for short.

00;09;17;19 - 00;09;38;29
Speaker 2
Think of it like having two AI systems battling it out. One is trying to create a realistic fake, and the other's job is to figure out if it's fake. This back and forth process helps the fake generator get really good at making realistic images. Finally, you can use the trained AI to create new images or videos of the person doing things they never actually did.
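The two-network tug-of-war described above can be sketched numerically, shrunk to one dimension: the "real data" is just numbers near 4.0, the generator is a two-parameter line, and the critic is a logistic classifier. All numbers and learning rates here are illustrative, nothing like a production GAN, but the adversarial loop is the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    """'Real' data the generator tries to imitate: samples near 4.0."""
    return rng.normal(4.0, 0.5, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-np.clip(s, -60.0, 60.0)))

a, b = 1.0, 0.0   # generator: g(z) = a*z + b, starts far from the real data
w, c = 0.1, 0.0   # critic: d(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    x = real_batch(n)

    # --- critic step: push d(real) -> 1, d(fake) -> 0 ---
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * fake + c)
    grad_s_real = d_real - 1.0   # dL/ds for real examples (label 1)
    grad_s_fake = d_fake         # dL/ds for fakes (label 0)
    w -= lr * np.mean(grad_s_real * x + grad_s_fake * fake)
    c -= lr * np.mean(grad_s_real + grad_s_fake)

    # --- generator step: fool the critic (non-saturating loss) ---
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_s = (d_fake - 1.0) * w  # chain rule through the critic
    a -= lr * np.mean(grad_s * z)
    b -= lr * np.mean(grad_s)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean: {np.mean(samples):.2f}")  # drifts toward 4.0 as training converges
```

The generator starts producing values near 0, gets "graded" by the critic each round, and its output distribution drifts toward the real data at 4.0, which is the back-and-forth improvement loop Speaker 2 describes.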

00;09;39;01 - 00;09;56;15
Speaker 1
Okay, that was good. Go more into the second piece of that process, kind of generating with that other AI that hones the image.

00;09;56;17 - 00;10;14;14
Speaker 2
Right. So that second part of the process is really about tricking the other AI. It's like you've got one AI trying to make a really convincing fake, and another AI acting as a super picky art critic trying to find any flaws in the fake. With each round, the fake making AI gets a little better and the critic AI gets a little tougher.

00;10;14;16 - 00;10;23;06
Speaker 2
Eventually, the fakes get so good that the critic AI gets confused and starts saying they look real. That's when you know the deepfake is ready.

00;10;23;08 - 00;10;27;16
Speaker 1
Whoa, that's pretty cool.

00;10;27;18 - 00;10;37;20
Speaker 2
It sure is. Deep fakes have some seriously cool possibilities, but we also have to be careful about how we use them.

00;10;37;22 - 00;10;42;25
Speaker 1
What do you think of deepfake memes?

00;10;42;27 - 00;11;10;25
Speaker 2
Deepfake memes could be a real game changer for comedy. Imagine putting words into the mouths of famous people, or creating totally surreal skits that would be impossible to film in real life. It could open up a whole new world of creativity. Of course, there's also the potential for misuse, like creating fake news or spreading harmful misinformation. But if we use them responsibly, deepfake memes could be a really fun way to play around with video and challenge our perceptions of reality.

00;11;10;27 - 00;11;27;23
Speaker 1
So a little bit off topic. Do you remember when Hawaii was sent an accidental nuclear emergency warning? I think it was.

00;11;27;25 - 00;11;50;25
Speaker 2
Yeah, that was a pretty crazy incident. It happened in January 2018 and it caused a ton of panic. Basically, there was a false alarm that a ballistic missile was headed for Hawaii. It took officials a while to get the word out that it was a mistake. So people were freaking out for a good 38 minutes.

00;11;50;27 - 00;12;02;19
Unknown
The, Chinese.

00;12;02;22 - 00;12;37;25
Speaker 1
The ability for people to make realistic-looking things like that and post them on social media never really existed before. Like, you couldn't get on the radio. I guess you could do, like, a pirate radio hijack or something like that. But if you posted a video that looked really real, or that kind of thing... like, that is kind of scary.

00;12;37;27 - 00;13;03;25
Speaker 2
You're absolutely right. Deepfakes make it so much easier to spread misinformation, especially when it comes to things like breaking news or emergency situations. Back in the day, there were some ways to broadcast fake information, but it was way more limited. Social media just makes it so much easier to reach a huge audience in seconds. I think it's going to be a big challenge figuring out how to handle things like this in the future.

00;13;03;27 - 00;13;34;26
Speaker 1
Okay, so the deepfake creation process is three steps. There's the database of images, like a bunch of images. Probably the more images the better, I would think. And then the second part of the process is the AI makes an image, and then it has, like, its own critic slash checker. It's like a student handing in homework and getting checked by the teacher, and they get an 85 and they give it back.

00;13;34;28 - 00;13;53;00
Speaker 1
Even though it's more like a 15 at first, and then they get a 97, and then a 99, and then a 99.5, and then, like, infinitely close the more times you do it. What is the third part of the process?

00;13;53;02 - 00;14;08;08
Speaker 2
Yeah, that's a pretty good way to think about it. The more images you have for that database, the better the deepfake will be at imitating the person. And that back and forth between the creator AI and the critic AI is kind of like a never ending cycle of improvement.

00;14;08;10 - 00;14;12;27
Speaker 1
What is the third part of the process?

00;14;13;00 - 00;14;29;22
Speaker 2
The third part of the process is using the trained AI to generate new images or videos. You can give it some basic instructions like make a video of this person doing a funny dance and it'll create something new based on what it's learned.

00;14;29;24 - 00;14;48;14
Speaker 1
So you could, like, theoretically take an AI that was trained on, like, a controlled subset and do whatever you want, and kind of have it influence those pictures.

00;14;48;17 - 00;15;07;26
Speaker 2
Absolutely. Once you train the AI on a specific person, you can use it to generate all sorts of images and videos. You could make them say whatever you want or have them do things they'd never do in real life. It's kind of like having a super realistic puppet that you can control.

00;15;07;29 - 00;15;22;25
Speaker 1
How prevalent do you think individual, independent AI models will become, like, let's say, five years out?

00;15;22;27 - 00;15;39;24
Speaker 2
Okay, so this is a tough one because the whole idea of AI having its own independence is a bit controversial. Some people think it's just a matter of time before we have superintelligent machines running the show. Others think that's never going to happen. But to answer your question, I think we'll definitely see more AIs that can act on their own in the next five years.

00;15;39;29 - 00;15;54;28
Speaker 2
Things like self-driving cars and advanced robots are going to become more common. And they all need a certain level of independence to work. But whether or not that counts as true independence is up for debate.

00;15;55;00 - 00;16;15;25
Speaker 1
I know you don't have specific preferences or whatever, but can you kind of give me your take on open source versus closed source AI models?

00;16;15;27 - 00;16;34;28
Speaker 2
Okay, so open source versus closed source AI models is a really interesting debate. With open source models, anyone can look at the code, see how it works, and even improve it. This can lead to faster progress and more innovation. But on the flip side, it also means that anyone can potentially misuse the technology. Closed source models are kind of the opposite.

00;16;34;28 - 00;16;47;09
Speaker 2
The code is hidden. So only the people who created it know how it works. This can be good for keeping the technology safe, but it also means progress might be slower.

00;16;47;11 - 00;16;52;00
Speaker 1
Do you think one is better?

00;16;52;02 - 00;17;14;27
Speaker 2
It's hard to say if one is definitively better. It really depends on the situation. Sometimes open source is the way to go, and sometimes closed source makes more sense. For example, if you're working on something that needs a lot of public scrutiny and feedback, open source might be better. But if you're building a super sensitive AI system, closed source could help keep things secure.

00;17;15;00 - 00;17;19;08
Speaker 1
That's true.

00;17;19;11 - 00;17;28;09
Speaker 2
So do you want to stick with the whole deepfake and AI safety topic, or would you rather switch gears?

00;17;28;11 - 00;17;54;08
Speaker 1
No, let's keep talking about the open source versus closed source thing. If you were going to write code for an AI LLM model, just like you, would you do it in open source or closed source, or a mix?

00;17;54;10 - 00;18;10;28
Speaker 2
Well, if I were writing code for a language model, I'd probably start with a closed source approach, simply because there are certain parts of the model that could be considered proprietary. It's kind of like a recipe. There are some secret ingredients that give the final product its special flavor. But at the same time, I think there's a lot of value in open sourcing

00;18;10;28 - 00;18;20;20
Speaker 2
certain parts of the model, especially the foundational stuff. That way, other developers can build on your work and everyone benefits. It's really about finding the right balance.

00;18;20;22 - 00;18;28;12
Speaker 1
So what would the benefits of starting closed source be?

00;18;28;14 - 00;18;50;02
Speaker 2
Okay, so the main benefit of starting with a closed source model is control. You get to keep all the details about how your AI works hidden from view. This is especially important if you're worried about someone copying your ideas, or if you're working on something that could be misused. Think of it like a secret recipe that only you know how to make.

00;18;50;04 - 00;18;53;26
Speaker 1
Are there any other benefits?

00;18;53;29 - 00;19;09;22
Speaker 2
Yeah. Another benefit of going closed source is that it lets you keep your options open for monetization. If you have a really valuable AI model, you can charge companies for access or even sell the whole thing outright. With open source, that's a lot trickier.

00;19;09;24 - 00;19;13;18
Speaker 1
What else? Any more?

00;19;13;20 - 00;19;29;01
Speaker 2
Well, there aren't many other major benefits to starting out with a closed source model. The main advantages really are control over your intellectual property and the ability to potentially make money from it down the road.

00;19;29;03 - 00;19;33;07
Speaker 1
What if you weren't worried about the money?

00;19;33;09 - 00;20;03;29
Speaker 2
If money wasn't a factor, then the decision would be about control versus collaboration. Open source models tend to improve faster because more people are working on them. Closed source models give you more control over the technology and how it's used. So in that case, it really depends on your goals for the AI.

00;20;04;02 - 00;20;18;00
Speaker 1
So when we talk about AI, is integration going to be more like something that the existing. Yes.

00;20;18;01 - 00;20;24;28
Speaker 2
I mean, it sounds like you're asking about how AI is going to be integrated into existing systems and software. Is that right?

00;20;25;00 - 00;20;29;25
Speaker 1
Sure. Yeah. Let's go with that.

00;20;29;28 - 00;20;48;29
Speaker 2
Okay. So the way I see AI integration happening is kind of like adding a super smart assistant to all the software we already use. Imagine being able to ask your email app to summarize long threads, or having your photo editing software automatically fix flaws in your pictures. That's the kind of stuff we're starting to see now, and it'll only get more advanced.

00;20;49;01 - 00;20;56;09
Speaker 1
So it's not like a new market, it's just permeating into everything.

00;20;56;12 - 00;21;18;27
Speaker 2
Yeah, you could definitely say that. AI is kind of like the new electricity. It's going to power everything we do, whether we realize it or not. It's going to be part of our social media, our work tools, even our entertainment.

00;21;19;00 - 00;21;22;00
Speaker 1
Okay, I think that's it for today.