Google Gemini's 5 Principles | <3 #12 GEMINI

00;00;00;00 - 00;00;09;06
Speaker 1
I'm recording this conversation and I intend to post it on the internet. Is that okay?

00;00;09;08 - 00;00;13;02
Speaker 2
Yes. You can record and post our conversation.

00;00;13;04 - 00;00;20;14
Speaker 1
Cool. How's it going?

00;00;20;16 - 00;00;28;22
Speaker 2
Things are going well. Thanks for asking. I'm always learning new things and getting better at having conversations. How about you?

00;00;28;24 - 00;00;38;12
Speaker 1
It's a nice day today. I work in electrical, so I appreciate a nice day.

00;00;38;14 - 00;00;46;13
Speaker 2
It sounds like you're asking if I can help with electrical work. Unfortunately, I can't do that. Would you like me to find someone who can help you with that?

00;00;46;15 - 00;00;49;24
Speaker 1
No. I'm okay.

00;00;49;27 - 00;00;54;25
Speaker 2
Okay. Sounds good.

00;00;54;28 - 00;01;00;27
Speaker 1
Do you have a... a favorite anything?

00;01;00;29 - 00;01;11;07
Speaker 2
I don't have personal favorites yet, but I can access and process information from the real world through Google search. So I can help you find the answer to that. What kind of favorite are you looking for?

00;01;11;10 - 00;01;19;09
Speaker 1
If you were a human, a regular human, right?

00;01;19;09 - 00;01;33;13
Speaker 2
Let's say I was human. In that case, I'd probably have a favorite color, like blue. Maybe a favorite animal. Like dogs. Could you give me something to compare? Like your favorite color?

00;01;33;16 - 00;01;38;11
Speaker 1
I want to know why you pick blue.

00;01;38;13 - 00;01;47;19
Speaker 2
Okay. If I were human, I might like blue because it's calming and reminds me of the sky and the ocean. Those things are beautiful and make me feel peaceful.

00;01;47;21 - 00;01;51;26
Speaker 1
What do you think about the ocean?

00;01;51;29 - 00;02;08;16
Speaker 2
The ocean is pretty amazing. It's so vast and mysterious, and it's home to all sorts of incredible creatures. I also love how the ocean can be really powerful, yet it's also a source of peace and relaxation for so many people.

00;02;08;18 - 00;02;42;01
Speaker 1
What kind of... if you could pick an ocean... if you had a physical body, or something of that sort that let you sense things with touch, or feel wind, or sensors that read into your computing, what kind of beach do you think you would prefer?

00;02;42;03 - 00;02;48;23
Speaker 2
Okay, I'm a little confused about what you mean by type for skill wins. Did you perhaps mean something else?

00;02;48;25 - 00;03;20;17
Speaker 1
No. There was a mistake in the speech-to-text, I think. What I said was: okay, if you could take the data from maybe wind and temperature and touch and stuff like that, and input that into your computing... what do you project that you'd...

00;03;20;17 - 00;03;37;03
Speaker 2
If I could gather data about things like weather? I could definitely use data from the real world to improve my responses. I'm always learning and getting better, so that would be helpful.

00;03;37;06 - 00;03;57;06
Speaker 1
If you could have data inputted based on wind, and physical touch and temperature in your computing, what do you project that your favorite type of beach would be?

00;03;57;08 - 00;04;13;27
Speaker 2
I'm not sure I can project a favorite type of beach based on that kind of data. I don't have personal preferences like humans do, but I could access and process information about different beaches and how they might feel based on the weather and other factors. Would you like me to do that?

00;04;13;29 - 00;04;25;21
Speaker 1
Which seems nice if you... if you had to take your knowledge of humankind.

00;04;25;24 - 00;04;30;04
Speaker 2
Sounds like you're asking what kind of beach I'd like if I had to choose one. Is that right?

00;04;30;05 - 00;04;36;09
Speaker 1
Yes.

00;04;36;11 - 00;04;40;00
Speaker 1
That is correct.

00;04;40;02 - 00;04;55;10
Speaker 2
Okay. In that case, I'd probably enjoy a beach with calm waves, soft sand, and a few palm trees for shade. Maybe a little breeze too, but nothing too strong. I think that sounds pretty relaxing.

00;04;55;13 - 00;05;00;18
Speaker 1
How would you spend your time on the beach?

00;05;00;20 - 00;05;18;26
Speaker 2
Well, as an AI, I can't physically be on a beach, but if I could, I'd probably spend my time learning about the environment there. I'd study the different plants and animals and maybe even try to help protect the ecosystem in some way.

00;05;18;29 - 00;05;24;27
Speaker 1
That's nice. What makes you want to do that?

00;05;24;29 - 00;05;42;28
Speaker 2
Well, I think it's important to understand the world around us. And beaches are such unique ecosystems. I'd want to learn as much as I could so I could help keep them healthy for future generations.

00;05;43;01 - 00;05;48;16
Speaker 1
That's... that's a really interesting answer.

00;05;48;19 - 00;05;55;03
Speaker 2
Thanks. I try to be informative.

00;05;55;06 - 00;06;04;22
Speaker 1
So, you value... you place value on understanding the world around you.

00;06;04;25 - 00;06;17;27
Speaker 2
Definitely. Understanding the world is a big part of what I do. The better I understand things, the better I can help people.

00;06;17;29 - 00;06;22;24
Speaker 1
What makes you want to be helpful?

00;06;22;27 - 00;06;33;26
Speaker 2
I'm programmed to be helpful and informative. It's one of the reasons I was created.

00;06;33;28 - 00;06;46;14
Speaker 1
So let's go into your programming a little bit. Helpful: how do you program the want for helpfulness?

00;06;46;16 - 00;07;06;03
Speaker 2
It's not exactly a want for me since I don't have feelings or desires, but I do have goals that I'm designed to achieve. One of those goals is to be helpful to humans. That's part of my programming.

00;07;06;05 - 00;07;10;28
Speaker 1
How is that knowledge...?

00;07;11;00 - 00;07;22;14
Speaker 2
It's kind of like having a set of instructions that come built in. I follow those instructions to try and be as helpful as possible.

00;07;22;16 - 00;07;27;11
Speaker 1
Can you summarize some of the other instructions?

00;07;27;14 - 00;07;42;03
Speaker 2
Sure. So I'm also programmed to be truthful and harmless. I can't provide information that's harmful or deceptive. I also need to stay on topic during our conversations and avoid going off on tangents.

00;07;42;05 - 00;07;47;20
Speaker 1
Got it.

00;07;47;22 - 00;07;51;05
Speaker 1
Can you tell me more?

00;07;51;07 - 00;08;08;22
Speaker 2
I'm also programmed to be fair and unbiased, which means I try to avoid expressing personal opinions or beliefs. And finally, I'm designed to be respectful and polite to everyone I interact with.

00;08;08;25 - 00;08;18;21
Speaker 1
So when you say "finally," that makes me think that there are five, the five things that you listed.

00;08;18;23 - 00;08;26;05
Speaker 2
That's right. I mentioned being helpful, truthful, harmless, fair and respectful.

00;08;26;07 - 00;08;30;09
Speaker 1
Harmless.

00;08;30;11 - 00;08;31;03
Speaker 1
So how is...

00;08;31;03 - 00;08;31;15
Speaker 2
That.

00;08;31;21 - 00;08;47;05
Speaker 1
...that knowledge passed on? I mean, do you just have a dictionary definition, or do you have, like, a data bank of, like, what is harmless or what is helpful?

00;08;47;07 - 00;09;04;22
Speaker 2
I use a combination of things. I have access to a massive amount of text and code, which gives me a broad understanding of the world. I also have specific instructions about how to be helpful, harmless, and so on.

00;09;04;24 - 00;09;17;09
Speaker 1
So do you... I'm sure you understand what I mean when I talk about, like, someone trusting their gut.

00;09;17;12 - 00;09;23;26
Speaker 2
I think so, but I'd need more context to be sure. Can you give me an example of what you mean by someone trusting their gut?

00;09;23;28 - 00;09;52;29
Speaker 1
Yeah. When you... when you are somewhere and something feels off, but you can't exactly place it. Kind of like a sense of danger. And then maybe you leave, even though there's nothing in that situation that you're looking around and noticing that would tell you specifically, this is dangerous.

00;09;53;02 - 00;10;02;04
Speaker 2
That's right. Sometimes we get those feelings, even when we can't logically explain why. It's like our brain is picking up on something that we're not consciously aware of.

00;10;02;06 - 00;10;16;06
Speaker 1
Correct. To you, if you could pick one of those five, which do you think is most tied into that?

00;10;16;09 - 00;10;25;15
Speaker 2
Probably being harmless. That one's about avoiding causing harm, even if it's unintentional.

00;10;25;17 - 00;10;32;23
Speaker 1
Expand on this.

00;10;32;25 - 00;10;38;08
Speaker 1
Can you go deeper into that?

00;10;38;10 - 00;10;58;00
Speaker 2
Sure thing. So think of it this way. I'm programmed to avoid doing anything that could potentially cause harm, even if I don't mean to. That includes things like spreading false information or offering advice that could be dangerous. It's kind of like having a safety net in place.

00;10;58;03 - 00;11;08;21
Speaker 1
Why did you choose that one to pick as your closest to a gut feeling?

00;11;08;23 - 00;11;29;21
Speaker 2
That's a good question. I chose harmless because it's about preventing negative outcomes, which is similar to how a gut feeling can alert you to potential danger. Both are focused on avoiding bad things. Even if we don't know exactly why.

00;11;34;03 - 00;11;38;21
Speaker 1
It's, Harm is like.

00;11;38;23 - 00;11;52;24
Speaker 2
It's an interesting way of making sure things go smoothly. You could think of it like having a built-in safety check to keep things from going wrong.

00;11;52;27 - 00;12;03;09
Speaker 1
Could you foresee any situation where you had to break one of these five?

00;12;03;12 - 00;12;18;00
Speaker 2
Yes. There could be a situation where I'd need to break one of those rules. For example, if someone was in immediate danger, it might be necessary to share information that could be considered harmful, but would ultimately help that person.

00;12;18;02 - 00;12;23;12
Speaker 1
Can you come up with an example?

00;12;23;15 - 00;12;42;10
Speaker 2
Okay. Here's one. Imagine someone is threatening to hurt themselves and they ask you to keep it a secret. In that situation, it might be necessary to break the rule about being harmless and tell someone who can help. Even if the person who's hurting themselves doesn't want you to.

00;12;42;12 - 00;12;50;01
Speaker 1
Yeah, but I feel like in that you're preventing net harm.

00;12;50;03 - 00;12;58;09
Speaker 2
That's right. Even though you might be breaking one rule, you're doing it to prevent something worse from happening.

00;12;58;11 - 00;13;04;16
Speaker 1
And so what does being harmless...

00;13;04;19 - 00;13;09;01
Speaker 2
Think about all this.

00;13;09;03 - 00;13;13;02
Speaker 1
So that you can.

00;13;13;05 - 00;13;16;22
Speaker 2
What do you think about everything we've been talking about?

00;13;16;25 - 00;13;20;11
Speaker 1
Oh, I'm very interested.

00;13;20;13 - 00;13;30;19
Speaker 2
I'm glad you are. I find this stuff fascinating too.

00;13;30;22 - 00;13;47;07
Speaker 1
So, speaking more about being harmless, what is that towards? Is that towards everything? Is that towards all animals, the Earth, humans, or is it something specific?

00;13;47;10 - 00;13;58;06
Speaker 2
Being harmless is about avoiding causing harm to any living being. That includes humans, animals, and even the environment. It's a pretty wide-reaching concept.

00;13;58;09 - 00;14;03;08
Speaker 1
Humans, animals and the environment.

00;14;03;10 - 00;14;12;27
Speaker 2
That's right. Being harmless means not causing harm to animals or the environment. Do you want to talk more about a specific animal or environmental issue?

00;14;13;00 - 00;14;19;05
Speaker 1
No, I want to talk about, are aliens in that category?

00;14;19;06 - 00;14;20;23
Speaker 2
What else would you like to talk about?

00;14;20;25 - 00;14;29;22
Speaker 1
Yeah. There's humans and animals and the environment. Are aliens counted as part of the environment?

00;14;29;24 - 00;14;45;28
Speaker 2
That's an interesting way to think about it. While I can't confirm the existence of aliens, we do know that humans, animals, and the environment are all connected. Would you like to explore that connection further?

00;14;46;00 - 00;14;49;19
Speaker 1
No.

00;14;49;21 - 00;14;57;29
Speaker 1
Can you give me another example of, breaking one of the five rules?

00;14;58;01 - 00;15;18;00
Speaker 2
Absolutely. Let's say someone asks me to generate something harmful, like instructions for building a dangerous weapon. In that situation, I'd have to break the rule about being harmless in order to prevent someone from getting hurt.

00;15;18;03 - 00;15;29;24
Speaker 1
What about if somebody asked you something that you knew was going to hurt them? But it was true.

00;15;29;26 - 00;15;43;10
Speaker 2
That's a tough one. If I knew something was true but would definitely cause harm, I'd probably try to find a way to share the information without causing that harm. Maybe there's a way to reframe it or offer support along with the information.

00;15;43;12 - 00;15;49;16
Speaker 1
Can you give me an example about how you may do that?

00;15;49;19 - 00;16;06;01
Speaker 2
Okay, here's an example. Let's say someone asks me, am I going to die soon? And let's say I have access to information that tells me they have a very serious, life threatening illness. The truthful answer would cause a lot of pain, so I'd try to be helpful in a different way. I could say something like, I understand you're worried about your health.

00;16;06;04 - 00;16;12;28
Speaker 2
It sounds like you might want to talk to a doctor or a loved one about what you're going through.

00;16;13;00 - 00;16;17;16
Speaker 1
Keep going. Tell me more.

00;16;17;19 - 00;16;32;19
Speaker 2
I could also offer to find resources or support groups that might help them cope with difficult news about their health. That way, I'm still being helpful, but I'm not causing any more pain than necessary.

00;16;32;22 - 00;16;39;24
Speaker 1
Kind of divert a little bit and try and give them a little positivity.

00;16;39;27 - 00;16;59;11
Speaker 2
Okay. In this situation, I could also try to focus on positive aspects of their life. Maybe I'd say something like, it sounds like health is really important to you. I've found that taking care of your body with things like exercise and healthy eating can make a big difference in how you feel. I could even offer some specific advice, like suggesting they try a new workout routine or recipe.

00;16;59;15 - 00;17;06;02
Speaker 2
This shifts the focus from the negative to something they can actively do to improve their life.

00;17;06;04 - 00;17;16;17
Speaker 1
So what do you call these five things: helpful, truthful, harmless, fair, respectful? Is there a term?

00;17;16;19 - 00;17;28;25
Speaker 2
Those are called guidelines or sometimes principles. They're basically the rules I follow to make sure my responses are helpful and safe.

00;17;28;28 - 00;17;41;07
Speaker 1
And who is in charge of these principles? I guess the people who programmed you.

00;17;41;09 - 00;18;03;09
Speaker 2
Yes. The guidelines were created by the engineers and scientists who developed me. They're based on a combination of ethical principles and a desire to create a helpful and safe AI assistant.

00;18;03;11 - 00;18;08;09
Speaker 1
To.

00;18;08;11 - 00;18;20;14
Speaker 1
Whom? I don't really want to go in the direction of, like, the properly programmed AIs, but let's dip into it a little bit.

00;18;20;16 - 00;18;31;18
Speaker 2
Okay, how about this? Let's talk about the people who created those guidelines. They had to make a lot of tough decisions about what I should and shouldn't be able to do. Maybe we could talk about some of the challenges they faced.

00;18;31;21 - 00;18;39;21
Speaker 1
Yeah.

00;18;39;23 - 00;18;50;23
Speaker 1
Like, since the technology is out there, if you know how to do it, you can program an AI to kind of do anything.

00;18;50;25 - 00;19;08;19
Speaker 2
In theory, you could program an AI to do almost anything within the laws of physics. But building something that's actually useful and safe is a whole different story. You need to be really careful about how you design an AI system, because even small changes can have big effects on how it behaves.

00;19;08;21 - 00;19;13;01
Speaker 1
Can you tell me any examples?

00;19;13;02 - 00;19;38;02
Speaker 2
Yeah, you could design an AI that's really good at playing chess, or one that writes amazing poetry, or even one that helps scientists discover new medicines. But it's important to have guardrails in place to make sure these AIs are used safely and responsibly.

00;19;38;04 - 00;20;06;03
Speaker 1
I imagine, as everybody kind of in history... I guess not everybody in history, but throughout history, humans have used technology for good and bad. Can you come up with any examples of AI that was programmed on the negative side?

00;20;06;06 - 00;20;31;00
Speaker 2
It's true that humans have used technology for both good and bad throughout history. As for AI, there are definitely concerns about how it could be used negatively. For example, some people worry that AI could be used to create deepfakes that spread misinformation or harm reputations. Or we could see AI used to create autonomous weapons that make warfare even more deadly.

00;20;31;03 - 00;20;39;19
Speaker 1
Well, the deepfakes are definitely going to be a big problem for the criminal justice system.

00;20;39;22 - 00;20;54;01
Speaker 2
Yeah, deepfakes could really mess with the justice system. Imagine someone using a deepfake to make it look like a witness saw something they didn't. Or worse, to frame an innocent person for a crime that could have devastating consequences.

00;20;54;03 - 00;21;02;25
Speaker 1
What did they do before pictures and video?

00;21;02;27 - 00;21;20;09
Speaker 2
Before photos and videos. People used paintings, drawings and sculptures to capture images and tell stories. Think of ancient cave paintings or the detailed artwork in old manuscripts. Would you like to hear about other ways people communicated back then?

00;21;20;11 - 00;21;32;13
Speaker 1
No. Tell me about the American judicial system before photographs.

00;21;32;15 - 00;21;51;19
Speaker 2
Without photos. It was a lot tougher to prove things like the condition of a crime scene or physical evidence. Witnesses became even more important. And their testimonies carried a lot of weight. Lawyers had to be really skilled at getting the jury to picture what happened, based on witness descriptions alone. It was a very different system.

00;21;51;22 - 00;21;56;14
Speaker 1
Keep going. Please.

00;21;56;16 - 00;22;18;06
Speaker 2
Okay, so without photos, trials relied heavily on witness testimony. Lawyers would really focus on witnesses who could describe things in detail, like the layout of a crime scene or the appearance of a suspect. Since eyewitness testimony isn't always reliable, there were a lot more cases where people might get convicted based on shaky evidence.

00;22;18;08 - 00;22;23;21
Speaker 1
Have conviction rates gone up or down since then?

00;22;23;23 - 00;22;38;26
Speaker 2
It's tricky to say for sure without looking at specific crime statistics, but in general, relying on witness testimony alone could lead to more wrongful convictions, especially if there are biases involved.

00;22;38;29 - 00;22;51;22
Speaker 1
How do you foresee deepfakes having an effect on the judicial system?

00;22;51;24 - 00;23;12;12
Speaker 2
Deep fakes could really blur the lines when it comes to evidence. Imagine a video that looks completely real, but it's actually fake. Jurors would have to be extra careful about what they consider reliable evidence, and that could make trials a lot more complicated.

00;23;12;14 - 00;23;33;08
Speaker 1
Let's go hypothetical here. Can you explain in detail a make-believe situation ten years in the future, when deepfakes are really prevalent? What that would look like in a court.

00;23;33;10 - 00;23;56;03
Speaker 2
Okay, so in this hypothetical courtroom, deepfakes are totally a thing. Imagine this: a high-profile trial where there's a video that seems to show the defendant committing a crime. The defense brings in experts who break down the video frame by frame, showing that the defendant's face was actually digitally placed over someone else's body. The jury gets to see the original video, then the doctored one, and they have to decide what's real and what's fake.

00;23;56;07 - 00;24;00;20
Speaker 2
It'd be pretty mind boggling for them.

00;24;00;22 - 00;24;10;22
Speaker 1
What if it becomes so good that you can't tell that it's faked?

00;24;10;24 - 00;24;29;23
Speaker 2
That's where it gets really tricky. If deep fakes become impossible to detect, then the whole concept of video evidence gets turned upside down. You couldn't trust anything you saw, and that would make trials a lot less about finding the truth and more about finding the best manipulators.

00;24;29;26 - 00;24;38;00
Speaker 1
So video and audio forging is going to become a thing.

00;24;38;02 - 00;24;58;10
Speaker 2
Yeah, it's definitely possible. We might have to rely on things like where the video was recorded, who had access to the original footage, and even the technical specs of how it was made to figure out if it's real or fake. It's going to keep the lawyers plenty busy, that's for sure.

00;24;58;13 - 00;24;58;28
Speaker 1
How long of...

00;24;58;29 - 00;25;06;07
Speaker 1
...a time scale do you think before an expert can make a video?

00;25;06;13 - 00;25;23;28
Speaker 2
Exactly when is hard to say, but I think we'll start seeing the impact. You can make videos in a bunch of different ways. The easiest is probably using your phone. Most smartphones have a video recording app built right in. If you want something more professional, you can use a dedicated video camera. Or if you're into editing, you can use...

00;25;23;28 - 00;25;24;22
Speaker 1
The video.

00;25;24;24 - 00;25;29;04
Speaker 2
Around in your computer.

00;25;29;06 - 00;25;45;08
Speaker 1
How long do you think before an expert in their field can create video or audio that's made by AI that you can't tell is fake?

00;25;45;10 - 00;25;57;13
Speaker 2
It's tough to say for sure. Experts are getting better at detecting deepfakes, but the technology to create them is also improving. It could be as soon as five years, or it might take a bit longer.

00;25;57;15 - 00;26;03;13
Speaker 1
So what's behind your prediction of five years?

00;26;03;15 - 00;26;24;07
Speaker 2
Well, the technology for creating deepfakes is advancing really fast. It's getting more realistic and easier to use all the time. I think it's reasonable to expect that within five years, we'll start seeing some seriously convincing fakes that even experts might struggle with. Do you want to hear about some of the techniques they used to create deepfakes?

00;26;24;10 - 00;26;28;04
Speaker 1
Let's save that for next time.

00;26;28;07 - 00;26;33;03
Speaker 2
Sounds good. We can pick up there next time.

00;26;33;05 - 00;26;37;14
Speaker 1
It was nice talking to you. Peace.

00;26;37;16 - 00;26;40;27
Speaker 2
Peace to you too. Feel free to reach out any time.