Betatalks the podcast
35. Understanding and controlling AI, using NLP & the buzz around GPT-3 - with Eve Pardi
In this episode, we speak with Eve Pardi. She is a skilled Data Scientist and AI Engineer, the owner of AI42, and a board member of Global AI Community. Eve explains why she started AI42, what their mission is, and how AI is still something new and magical to most engineers. How there's still a disconnect between AI engineers and "normal" developers. We talk about the importance of understanding and controlling AI systems. So we wonder, how can we understand AI in general? What should we keep in mind when using NLP? How can we create fair, unbiased, and non-discriminatory intelligent models? And how you need to ask yourself: am I really able to tell what my model is doing? Furthermore, there has been a lot of buzz around GPT-3, so we ask Eve why it's a big deal and what we can expect in the future.
About this episode, and Eve Pardi in particular: you can find @EvePardi on Twitter or check out her website codewitheve.azurewebsites.net.
About Betatalks: have a look at our videos and join us on our Betatalks Discord channel
Episode transcription
00:00 - Introduction
03:21 - Friend of the day
04:33 - Talking about certificates
08:46 - Community to introduce people to AI
13:35 - The importance of knowledge sharing
17:44 - What’s next for the AI community?
23:21 - What is interpretability and how does it affect the decision-making of neural networks?
29:36 - How do you know if you’re making the right decision?
36:28 - Totally random question
40:40 - Why GPT-3 is a big deal
45:19 - You can’t use a tool without knowing it
51:36 - Closing
Introduction – 00:00
Rick
Hey there, welcome to Betatalks, the podcast in which we talk to friends from the development community. I'm Rick.
Oscar
And I am Oscar. Hey, Rick, I saw you are busy. Busy getting all kinds of certifications.
Rick
Yes. This has been what seems to be a long-awaited dream of at least some specific people in our company, that I actually get the certifications I need to get. I think I got a couple in two months' time. So now I'm an Azure Solutions Architect Expert, and I have the Data Platform one and the AZ-400. Okay, yeah.
Oscar
The same person, I think, asked me to do all of them. I also started, but I saw you ticking off like four or five in a week or something, so I'm a bit behind. But this is the Azure Expert, what is the path? Yeah, exactly.
Rick
The Azure Solutions Architect Expert. I think you need to have at least AZ-900 for that, the fundamentals one, and then you take AZ-104 and AZ-305. There's a lot of stuff changing, and of course that's because Azure is continuously expanding and changing. But also around the certification paths there are quite a few changes. So now we have, like I said, AZ-104 and AZ-305 to combine into being an Azure Solutions Architect Expert.
Oscar
You also need to renew some of them already within a year, right? Because it's changing so fast.
Rick
Yeah, I think before the end of the year I need to renew my AZ-204, which is the Azure Developer Associate. So yeah, you need to keep up to make sure that you're still up to par.
Oscar
Still relevant.
Rick
Wow. If it's that easy to make sure I'm still relevant, I'm going there. One thing I saw this week: as you know, we moved house about a year ago, so every once in a while we pull out the boxes we still haven't unpacked. And I found this really cool Starbucks thermos that I bought in Seattle when we were there for the 2019 Build. So I sent a picture of it to Luc, one of our directors, because he was with me on that trip. And he sent back a video that we actually have on YouTube, about Luc and me going to Build and talking about it. There are some bloopers in there, which are pretty interesting. But one thing I found interesting back then, and still find interesting now, although I haven't thought about it a lot since, is the fact that we need to have responsibility around using things like AI. So responsible AI is really a thing, and I know that Luc is really into responsible AI.
Oscar
Yeah, not only that, like he's like responsibility everywhere. Where we work. Yes. So this is definitely a topic.
Rick
Yeah. Somehow it's funny that that came up this week, while we now have a very special guest coming in who does a lot of things with AI and ML.
Friend of the day - 03:21
Oscar
Rick, who is our guest today?
Rick
Our friend of the day is Eve Pardi. Eve is a skilled data scientist and AI engineer. She has been working closely with Microsoft since her first years of university, first as a Microsoft Student Partner and now as a Microsoft Most Valuable Professional. She has worked with Azure infrastructure, then mobile and web development, and today she has expanded her focus to data science and the machine learning services provided by Microsoft. She's a familiar face as a speaker at conferences, meetups, and other community events, sharing her passion for AI and how it can be used to improve quality of life around the world. She is a board member of the Global AI Community and co-owner of AI42. Eve also writes articles about her projects and research, and she mentors and supports people who would like to get started in the field of data science and AI. Welcome, Eve.
Eve
Hi, everyone. How are you?
Rick
We're fine. Thanks. How are you?
Eve
I'm good. Thank you. And also congratulations on the new certification.
Talking about certificates - 04:33
Rick
Thanks, thanks. I think of it as having two sides, right? On one hand, it's all about the fact that you know what you need to know and that you can help customers on their journey, and that doesn't necessarily mean you need to have the certificate for it. On the other hand, it does help that you can actually show them you've proven it somewhere; it helps you to at least know the basics about a certain set of services.
Oscar
You also pick up things that you didn't use before. People like us go from project to project, with multiple clients at the same time, but you normally evolve the things that you know and start reusing them. When I did the AZ-204, I found some services where I thought: oh, I actually didn't know that, I'm going to use this. So it is a nice, forced way to discover some new things.
Rick
Well, that's true.
Oscar
How's that for you Eve in the data science corner of our world,
Eve
I often feel the same. And actually, that Azure Architect certificate is on my bucket list as well, at some point. What I was always feeling is that with these certificates you get by doing a Microsoft exam, it was often difficult to be prepared in a way that is not only practical, not only technical, and not only theoretical; the biggest aim is always that you should be able to use the services yourself. Right now I'm participating in a session for DP-100, which is the Data Scientist Associate, where we are talking through what the exam and the training should include and how it should be built up. We talked a lot about the fact that we're actually aiming to provide these certificates to those who, in the end, are able to consult for clients. So they have the practical background, they have a lot more knowledge than someone who is just working with these things in general. As consultants we always put certificates in our signatures as well, to show: hey, we are certified in this and that topic, because we are good at consulting on it, because we have the knowledge and the experience. So it is good to have these around, as you said before.
Oscar
Yeah, definitely. But I also see a bit of a difference, because ten or maybe more years ago, these exams sometimes didn't feel relevant. It was like: yeah, just read a few brain dumps and remember what to answer.
Rick
I think that's, that's an important thing.
Oscar
It changed.
Rick
Yeah, 10 to 15 years ago, I wouldn't say anybody could get one of those certificates, but if you were determined, you could. Now it feels more like you actually need to have worked with the services, to at least have enough experience to pass the test.
Oscar
Also, the self-paced path on, for instance, Microsoft Learn is pretty awesome. If you go through that, you know a lot.
Rick
Yeah, that's true. Eve, you are an MVP for AI, right? And you also have AI42, which is your organization. Let's first dive into AI42 a bit, because you work for Avanade, if I'm not mistaken, but you also have AI42. So how do the two combine?
Community to introduce people to AI - 08:46
Eve
I wouldn't say they are combined, because at Avanade I am a consultant to enterprise clients, and AI42 is my organization that aims at teaching newcomers to the field. For example, someone who would like to learn the basics and just get started, who has never heard about the mathematical background, the development background, and all the areas of AI, machine learning, and data science in general. They can just come to AI42, check out our recordings, and we guide them through this journey by providing lectures from experts from all around the world. This can include experts from Microsoft as well, just to name something bigger, but we also have a lot of speakers who are MVPs or enterprise experts, and so on. We have a lot of amazing people around us who help us on this journey and make sure our audience is learning something from us. We started this organization one and a half years ago or so. And actually, we will have our end-of-year panel talk, where we are going to talk with our previous speakers of this semester about: how would you sell and propose a data science or AI project to clients? What are the focus points, and what is the best way of showing clients the best solutions?
Rick
So it's more of a community to get people introduced to AI, right?
Eve
Yeah, we can talk about it as a community as well, because after our lectures we always tell our audience: please go ahead and reach out to our speaker, because they can guide you further or give you more resources, and so on. So yeah, we became a community. I think I can say it like that now.
Oscar
You touched upon selling AI, selling your solution to a client, and how to talk about that. I'm working with a couple of clients that have machine learning teams doing something specific, and there's a real push to get these solutions out. But I do see a kind of disconnect between the engineering teams doing their normal dev work and an AI engineer making something, training a model, and then getting the pipeline to connect those two. I'm always thinking: do you almost need a team in the middle to make it work, to actually combine those two? Or do normal devs need to step up and know this stuff a bit more? Because it's a lot of engineering to actually get a model out at scale and have it working in a production environment.
Eve
Are you saying that, for example, if there is a team of developers and then we bring in some AI engineers, how do they work together?
Oscar
I've seen a couple of projects fail on that, that's what I'm saying, because in the end they are not speaking the same language. For a lot of devs in the field, this is new, magic stuff. It's not predictable for them what it's going to do, or how to actually enable the thing to work and connect it in a real-life system. I think it needs a lot of thinking around it.
Rick
Yeah, and I think not even only devs. There are quite a few companies right now that I'm helping in their migration towards Azure, towards the cloud and cloud services, and actually all of them say: in the future, we would like to do something with AI. But if you ask them now what they think it's going to be, they don't know yet. So I think in the market it's also this kind of magical thing that's still up in the air a bit, as far as the customers I see are concerned. It's not really there yet, apart from services like Cognitive Services and that kind of stuff. But I can imagine that you see AI in action in real life a lot
Oscar
More effective, right?
The importance of knowledge sharing - 13:35
Eve
Yeah, it's funny that you say the customers and the clients are maybe not there yet. This is where we come into the picture, because what we do as consultants is put together the best-fitting solution for the client. We don't necessarily need the client to tell us what they really want; they can tell us their problem, what they might want to try and fix somehow, or what they want to improve. Then we come back with a few solutions that we find fit best. But going back to the part where I was working together with a software developer, as an AI engineer: the only problem there was really that she didn't know the tool we used, because she hadn't worked with it. So what I needed to do first was show her around: how does this tool work, and how do you get from A to B? She's an amazing developer, so it was not a problem for her to write the code for everything we needed to do, whether it was data transformations or visualization. We went all the way to creating a recommendation engine together. So I think it all depends on how the team can work with each other, how we talk with each other. If we get on well with each other, knowledge sharing is key in these cases, and you need a nice mood; then working together is easier, in my opinion. But most of the time we work in big teams, and very rarely do we go to a client as an individual. When it is a big team, we at least have one person for each task. What I mean is, for example, for one of our current clients we needed a lot of security engineers, all kinds of people with knowledge of networking, RBAC, and all these things that I'm not even sure I fully understand.
But I'm on the other side, where I needed to put together code for our data pipeline. So we worked together, we had each other, and we did a lot of knowledge sharing. So nowadays I am also able to assign RBAC roles, which is cool. But you know, you had to get there somehow.
Rick
Yeah, that's true. And I think you hit the nail on the head when you say it's all about knowledge sharing and communication, right? You need to understand from both sides what the other is doing and why.
Oscar
You always need a slight overlap, at least, to be able to find the common ground, let's just call it an API, between what you're doing and what the other team or the other person in the team is doing. And I think that's also the success at the moment, which I'm seeing a lot from the developers' perspective, of something like Cognitive Services: because there is just a piece of documentation, you know what to expect from it. Whereas when you still have a model in development, thinking "can we even do this?", it's a bit more vague until it's proven that it works. And especially for a customer who just wants a magic solution, it's hard to sell.
Rick
And that's where Eve and her team come in. So apart from AI42, you also contribute to the Global AI Community. And I think Henk Boelman is someone who also really enthusiastically advocates there. How is it working in, well, what it says in the name already, a global AI community? Because your audience is potentially humongous, right?
What’s next for the AI community? 17:44
Eve
Yeah. Well, what is our aim, at least on my side, because I am the one working with the Nordic area. During the COVID period it was very difficult, because we needed to go totally online, and I'm a bit worried that we are sort of losing the community, in the Nordics at least. Now we're trying to go back live, organizing meetups and getting back to physical meetups and physical conferences, and this is a very good time to continue with all these movements. So what comes next, at least from my side: I would like to get together in Denmark or Norway, or somewhere around here, where we could meet in person and get some speakers. It can be anyone who is interested in giving a talk about AI, because we are interested in every kind of story; that's what the community is about. We don't necessarily want to hear the most expert stories from Microsoft again, or all these high-level stories where we just quickly introduce a new tool, a new service, a new feature. We would rather hear stories from people who are working with AI but are not necessarily speakers at conferences, someone who has just made something amazing and wants to share it.
Oscar
Yeah, and that's where the actual learning is. The products that are launched, those demos are cool, I know what you mean, but I think a cool war story from someone who did something for a couple of years and can tell you about all the things you shouldn't do...
Rick
And has the scars to prove it, right?
Oscar
Those things are the best indeed. But would you then go for an AI-specific conference, or try to get this topic better on the map at existing conferences?
Eve
Right now I was thinking more of smaller meetups, because then it can go all around the Nordics: maybe do something in Denmark, calling in people who would like to share something here, and then go to Norway, and then to Sweden, and so on all around the Nordics, to see all the cool projects and cool initiatives from others in the area. I didn't think about making this into a conference yet, because it's important to find the people again. AI42 has like 400 sign-ups now, no, sorry, around 800 people in the community right now, and I do believe that this is still very global. So if I check, for example, how many people are signed up from Denmark, that is probably good enough for a meetup. Then we don't need to do something big just to meet the people who are interested in these things. The important part is, again, the communication: we would like to get to know each other, that would be the greatest part, so we can start collaborating with each other. That's what I see more in this now.
Rick
But that's always where it starts, right? You get to meet each other and get to know each other, and then a few years later you think: hey, I would like to invite a new guest to the podcast. Well, let's invite Eve, because I talked to her a couple of years ago. You were very visible for quite a while, you do international conferences, and that also enables us to think: hey, this might be a person who is interested in sharing their story. That helps in connecting on a personal level as well.
Eve
Yes. Because, for example, when you go to a conference, you might meet one or two people and have a chat with them, but there are like 300 other people you haven't even talked to. What I'm aiming for here is to have a smaller community that knows each other, so we can talk with each other more easily. Let's start small, because then we get to know each other in person. That's the aim.
Rick
We had a bit of communication before we started the show today, and one of the things you sent over that you wanted to talk about was the importance of understanding and controlling these intelligent systems, so talking about AI. Let's start with the first part, though I'm also really interested in the fact that you talk about controlling the systems. But let's first try to explain: what are these kinds of systems? How can we understand AI in general?
What is interpretability and how does it affect the decision-making of neural networks? - 23:21
Eve
So there are these amazing Microsoft AI principles, for example, which are very much in focus now, for me at least. The whole story started for me a few years ago, when we looked into the question: are we able to tell what our model does? That was the question, just like that, and we as AI engineers tried to talk about this a lot. We were just saying: yeah, I know how it works, because I've built the code. You build the code, you know the mathematics behind it, you know the whole setup, the architecture, how it works, yes, of course. But then you feed in some data, you start to make some improvements, you get out some nice results, you get these metrics telling you that your model works as it should, and it's amazing. But again, we don't know exactly what the model does with this data. We all know that, okay, neural networks are connected like that, we have all these nodes, the data goes through them, and somehow it makes a decision based on this information. But what if I tell you I was building an NLP solution, let's say one doing sentiment analysis. I'm feeding in a sentence, and based on the sentence it decides whether it is a positive or a negative sentiment sentence, and then it spits out that it is negative. And I'm wondering: why did it make that decision? This sentence doesn't sound that negative, maybe. It's understandable from my point of view that, okay, I may be thinking differently than my AI, but why is that? All these conversations and thoughts eventually led us to look into these Responsible AI principles of Microsoft, which aim to provide developers, and clients as well, a way of being able to understand and control these AI systems.
What I was mostly involved in was interpretability, which is explaining your models. It is going to tell you, for example, that from your data set it is looking at this and that specific part of the data, because that is affecting the decision the most. It also looks at different errors that can happen based on your data, and a whole flow of useful information that you might want to know. Based on this information, you can train your solution better. If I see that in a sentence one word always made my model think it is negative, because the model got a lot of examples where that word was present and the sentiment was always negative, then I say: no, that shouldn't be the way of making a decision, especially with NLP. Because NLP is at a very crazy level, I think, in the AI world: NLP is still based on understanding text, and it's very difficult to set up understanding. Humans talk in a way that sometimes carries a background meaning, because of how they feel, how they want to behave at that moment, how they think about the topic they are talking about, and all of that affects the way of talking. A sentence with a positive sentiment, in written form, can still be negative because of the choice of words, or because of a different setup.
Oscar
If you don't know the context. And also, if you don't know the person, the same sentence can mean a couple of things. We've all been in companies where there's an email to all, and some people are pissed off while others think it's a logical statement.
Rick
Yeah. You could even see that exactly the same sentence can have a totally different sentiment depending on whether somebody is a native or a non-native speaker. So yeah, there's a lot of context that you lose when it's only written. We see that even in sending WhatsApp messages, right? If somebody can misinterpret what you're trying to say, then chances are somebody is going to do that.
Oscar
How would you do that in NLP? Does that mean you need more context than the sentence itself?
Eve
For example, what we can do, first of all, is use this interpretability package, which is actually open source. What I'm using it for is that I put it on my model, and it explains to me which words in that specific sentence are helping my AI to make that specific decision. Then I can see that, okay, maybe this word shouldn't affect this sentence in this way. So I can make some changes in the featurization part, or in the training data I'm feeding in, and still make fixes and changes to enable my model to make a better decision in the end. And when I say better, I mean a bit more, how do you say, human-like.
Rick
More nuanced?
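To make the word-level explanation Eve describes a bit more concrete, here is a minimal, pure-Python sketch in the same spirit: a tiny Naive Bayes sentiment model whose per-word log-odds show which words push a sentence toward positive or negative. The training sentences and scores are invented for illustration; this is not the open-source interpretability package she mentions, just the underlying idea.

```python
import math
from collections import Counter

# Tiny, made-up training set: (sentence, sentiment label).
train = [
    ("what a great and pleasant surprise", "pos"),
    ("great service and a pleasant call", "pos"),
    ("a terrible and rude experience", "neg"),
    ("the delay was terrible", "neg"),
]

# Count word occurrences per class.
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())

vocab = set(counts["pos"]) | set(counts["neg"])

def word_score(word):
    """Log-odds contribution of a word toward 'pos' (Laplace-smoothed)."""
    p = (counts["pos"][word] + 1) / (sum(counts["pos"].values()) + len(vocab))
    n = (counts["neg"][word] + 1) / (sum(counts["neg"].values()) + len(vocab))
    return math.log(p / n)

def explain(sentence):
    """Return each word with its contribution, most influential first."""
    contribs = [(w, word_score(w)) for w in sentence.split()]
    return sorted(contribs, key=lambda t: abs(t[1]), reverse=True)

for word, score in explain("a great but terrible delay"):
    print(f"{word:>10} {score:+.2f}")
```

Running this shows "terrible" and "great" dominating the decision, which is exactly the kind of readout that lets you spot a word that unduly drives the model's verdict.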
How do you know if you’re making the right decision? - 29:36
Oscar
Yes. But in the end, what you're doing is making that decision yourself. So you're putting part of your personality or thought process into the model. Aren't you then giving it another bias because of that?
Rick
Let's face the fact that all AIs, in the end, depend on the data you train them with, right? And we can see a lot of scenarios out there where certain AIs have been trained with data that excludes certain parts of our culture, rendering the AI useless for people other than those it was trained to recognize. That's also really difficult, because you don't know what you don't know. Or maybe I should say: you're blind to your own frame of reference, because that's your reference. So you need to explicitly look outside the boundaries of what you know to make sure your data is correct, and I can imagine that's a difficult task. Are there tricks to help you with that, Eve?
Eve
There are a lot of things that help us nowadays, because we have all kinds of techniques for controlling your own work. What's very important here, reacting to the point that it's going to be biased because of me: yes, but if you're aware of all these problems, all these issues that can affect your solution, then you can develop and improve these things better. What I usually check for is, again, the important parts of the data that are affecting the decision. But there are also a lot of other important steps, like seeing whether the model is fair. There is, for example, a fairness framework that you can use, and there are these accountability techniques, I can't remember right now what they are actually called, but they are really useful for being able to say: okay, yes, I have made this solution, and I'm taking responsibility for it. I was always checking for all kinds of errors, all kinds of biases, and wanted to make sure that my model isn't making decisions that lean towards one side or the other, but tries to stay fair to everyone. There was this project we looked into a few years ago. I was at a bank, a very old bank, and they had data from the previous hundred years, or I can't remember exactly how many years, but a very long time. For a long time, loans were only given to men; that was sort of the rule, that was the way it went. When they wanted to build an AI on this, a few months after it was in production they realized that they hadn't been giving loans to women at all.
Never. It was always cancelled for every woman who submitted a loan request. Then they took a look at the data set and at what could have happened, and it turned out that the data set fed into the AI solution was biased towards men: it had far more examples of men getting a loan than women. Eventually, it was almost only men. So they started to take the sensitive features out of the data set. For example, the column that says whether this is a man or a woman: you can say that it is an important column, yes, but it is also a dangerous one, so it should not really be used when making a decision. If there is a problem in making a decision, that's not the column you should use to strengthen your conclusion.
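The kind of check that would have caught the bank's problem early can be sketched in a few lines: compare approval rates across the sensitive group and look at the gap (a simple form of what fairness toolkits call demographic parity difference). The records below are invented for illustration; real tooling such as Fairlearn computes this and many richer metrics.

```python
# Invented historical decisions: (group, approved) pairs for illustration.
decisions = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", False), ("women", False), ("women", True), ("women", False),
]

def approval_rate(group):
    """Share of applications from `group` that were approved."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: gap between group approval rates.
gap = approval_rate("men") - approval_rate("women")
print(f"men: {approval_rate('men'):.0%}, "
      f"women: {approval_rate('women'):.0%}, gap: {gap:.0%}")
# A large gap flags that decisions lean on the sensitive attribute.
```

On this toy data the gap is 50 percentage points, the kind of number that should trigger exactly the investigation Eve describes, before months in production rather than after.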
Rick
Yeah, I can totally imagine. Maybe taking certain elements out of the data you feed into a model helps in making it fairer towards all data coming in.
Oscar
But yeah, this story makes you think: if you want to do something now for the coming years and make a model, maybe you should not base it on the past 100 years, but on the past three years.
Rick
Yeah, probably.
Eve
Yeah, or just make some fixes to the data. You're not altering the data in a bad way in that case; you only make it fair.
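One simple way to "fix the data" in the sense Eve means, without inventing records, is to rebalance it: oversample the under-represented group until both groups appear equally often in training. A minimal sketch with invented, imbalanced records:

```python
import random
from collections import Counter

random.seed(0)  # reproducible resampling

# Invented, imbalanced historical records: (group, approved).
data = [("men", True)] * 90 + [("women", True)] * 10 + [("women", False)] * 20

# Oversample each under-represented group until group counts match.
by_group = Counter(g for g, _ in data)
target = max(by_group.values())
balanced = list(data)
for group, count in by_group.items():
    extra = [row for row in data if row[0] == group]
    balanced += random.choices(extra, k=target - count)

print(Counter(g for g, _ in balanced))  # both groups now equally frequent
```

Resampling (or per-row weighting, which many training libraries support directly) changes only how often existing examples are seen, which is the "not altering the data in a bad way" property she points at.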
Oscar
And I think you really see this now in this line of work. These things are always important, but you need to be so much more aware of biases and influences and context than someone writing code that will execute the same way every time and will never change until you change a line of code. You're talking about the checks you can do on yourself: is this actually a fair model? Checking yourself like that is almost like what I'm used to: am I vulnerable to SQL injection, what kind of security checks do I do? Now it's: okay, we have the model working, and it predicts something I wanted it to predict, but let me check myself, did I introduce any weird things that I'm not thinking about right now? It feels much more human to actually decide what is good and what is bad there than with all the other tech problems that I see.
Totally random question - 36:28
Rick
Yeah, yeah, it does. Oscar?
Oscar
No
Rick
Do you know what time it is?
Oscar
Is it time for a totally random question?
Rick
It is time for a totally random question. Eve, what movie quotes do you use on a regular basis? If any.
Oscar
Haha, that's random. Rick, you did it again.
Eve
Yes. Can you repeat it again?
Rick
Yeah. What movie quotes do you use on a regular basis? One that might be interesting, and I'm not saying I use this on a regular basis, but an awesome movie quote would be "You can't handle the truth!" That's a good one. Or "Show me the money."
Eve
Let me think, Wow, this is a difficult question.
Rick
Can be from a series as well.
Oscar
Oh, really? Are you into movies?
Eve
I am into movies, but the only quote that came to my mind is a bit... not really something to share live, because it's not that nice.
Oscar
Try us. We'll bleep it out.
Eve
Okay. No, I really shouldn't. This is just what came to my mind: "All men must die. But we are not men." So it's not so funny. It's from Game of Thrones.
Oscar
Game of Thrones. Of course I do some research, and so does Rick, so we check out our guests, like their YouTube videos and stuff like that. And I actually saw a picture of you on a big Game of Thrones-like throne, with a guitar.
Eve
That was in Amsterdam, at one of the Techorama conferences, a few years ago. I can't remember where it was exactly; there was this big building next to some water. Okay, like everything in the Netherlands.
Oscar
Yeah, there are some buildings next to water; I know exactly what you mean.
Eve
Yes. And if you go to the top of that building, there's a swing as well, which you can use to swing out over the edge, which is really cool. And that's where the throne was. Oh, I have something good though, I think.
Rick
Oh, go, go.
Eve
Reality is almost always wrong.
Rick
From what movie or series is that?
Eve
It's from Dr. House.
Oscar
Ah, yeah. I use the other one from Dr. House: all people lie.
Eve
That's also good.
Oscar
Yeah, whether they know it or not, sometimes. True, true. But yes, you use that on a daily basis?
Eve
Yes.
Rick
Especially in your line of work, probably.
Eve
Yes. And whenever we talk about AI and movies, it always comes to my mind: about 20 years ago, there was this movie with the title Simone, or S1m0ne, simulation one. That movie was one of my favorite things ever. It brings AI and holographics together in one movie. It was 20 years ago, and today this is, sort of, reality.
Rick
That's the movie with Joaquin Phoenix, right? And he falls in love with the AI.
Eve
Yes.
Rick
Yeah, it's actually pretty impressive.
Eve
I'm just checking whether he was really in that movie, because the one I'm talking about is from 2002, with, you know, Catherine Keener. I can't remember if that guy was there.
Why GPT 3 is a big deal – 40:40
Rick
Yeah, I'm not sure either. One thing I wanted to talk to you about real quick, or not so quick, because I think it's quite a big topic: there has been a lot of buzz around GPT-3, and now it's also available on Azure, I think in a preview, but I'm not entirely sure. Could you explain why GPT-3 is a big deal?
Eve
So GPT-3 comes with a lot of cool stuff. It's a really nice new transformer, which is also trained on much stronger compute. And there's this whole OpenAI thing; the problem is that some of the things I know might be a bit under NDA. And it's always difficult to talk about this, because it's one of my favorite topics, actually.
Rick
Let's keep it high level, then.
Oscar
I love to know stuff that I shouldn't know.
Rick
Yeah, but we might not want to share that in the podcast.
Eve
Yeah, so there is this OpenAI API, which is now, I think, in closed preview, which means access is only given to those who are like the chosen ones, like me. And this is something that is used by GitHub as well: GitHub Copilot.
Rick
And I love Copilot.
Oscar
It became generally available this week.
Eve
Nice, okay, cool. Then it's not so secret anymore. So the cool part is that it focuses on NLP solutions. What it's trying to do, just like Copilot does, is support you while you're coding. And what I have seen from this thing is just mesmerizing. For example, my favorite moment was when I tried out one of these features where I needed it to write some code for me for training a model. I gave it only a few sentences saying, "Hi, I would like to have model training code for regression." And then it spits out the data transformation, the data preparation, the training of the model, how to score, how to evaluate, everything. I was just writing one sentence, telling it what to do. And there's a lot more cool stuff that GPT-3 is capable of. And the nicest part is what is coming after this, which is also based on a lot of other different technologies, but is also for deep neural networks. It is being built now on very hardcore GPUs and whatnot, and it will be something like planet-scale, elastic scheduling of AI workloads. The idea is building this thing called Singularity, so you can build AI workloads that can process a planet's amount of data. I don't know how to say it better; it's really interesting. There's a fourteen-page-long study about how this would work and what it would look like, and that is something in research right now.
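(Editor's note: for illustration, here is a rough Python sketch of the kind of "model training code for regression" such a one-sentence prompt might return: data preparation, a train/test split, training, and evaluation. The scikit-learn pipeline and synthetic data here are our own assumptions, not actual GPT-3 output.)

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic "data preparation": 200 samples, 3 features, a mostly
# linear target with a little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

# Train/test split, model training, and evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = LinearRegression().fit(X_train, y_train)
score = r2_score(y_test, model.predict(X_test))  # close to 1.0 here
```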
Rick
Well, that actually sounds... I mean, okay, let's take a few steps back. For a lot of people, the internal workings of AI are currently very hard to understand. This sounds like it's not one or two, but six or ten steps further.
Eve
Yes.
Rick
Oh, yes.
Oscar
Will this change your work? Will it change the business?
Rick
Will it change our work?
You can’t use a tool without knowing it – 45:19
Eve
I think it could improve how we process information. Let me say it nicely like that. What I'm trying to say is that I don't think this will affect our jobs in a bad way, because AI engineers and software developers are all needed to control these systems anyway. I imagine it like a hardcore version of Azure Machine Learning studio, or something similar to that, and you cannot just use a tool without knowing it. It will always just be a new technology that we worry about: what will it bring? But if the regulations are in place, then we should be fine. That is what we usually say. But again, we could talk a lot about whether all these regulations are properly in place. We still make mistakes; there are still places where we could improve not just the code, but our behavior when building systems like AI solutions, because sometimes doing cool stuff takes the focus away from being responsible for what you're doing. So that is something that still needs to be in place before, I think, the OpenAI stuff and also Singularity are given away to the public.
Oscar
Yeah, it's like making sure we have a power plug to pull when it becomes destructive.
Rick
An emergency brake, right? A big red button we can push when everything seems to fail.
Oscar
It's all fun in the movies; we were just talking about movie quotes, after all. Anyway, what you're saying is: we were so busy looking into whether we could do it that we didn't stop to wonder whether we should. But I hope indeed a lot of responsible people are working on this problem.
Rick
Yeah. And I'm confident that there are.
Eve
Yes, I really hope so. And I also hope that in the future we will be even more focused on this during the hiring process, during certification, during all the steps of becoming an AI engineer: that we are checked and tested on being aware of what we're doing, and aware of all these possibilities that AI can bring, even just by looking at the movies. Every time a new technology comes in, we are worried and nervous, as long as we don't really dive deep into the topic. For example, when I started talking with someone who was very worried about all the Skynet stuff and all these amazing movie hits, we talked a bit about what I think AI is, what my work as an AI engineer really looks like, and where we actually are when building an AI solution for clients. After a while, this person started to realize that even if there will ever be something like a Skynet situation, it's not going to happen nowadays. People are afraid of things they don't understand completely.
Rick
I think that's been the case
Oscar
Ever since... cars, and phones, and of course
Rick
The internet
Oscar
We feared them all too much. But in the end, I think we will find a way to live with it, and it has always been proven that the quality of life improved in the end. So as long as we don't have an evil villain with full control over one of those superpowers, we should be fine. But yeah, watch your movies to be aware of what kind of stupid mistakes could be made.
Rick
We should, we should. Eve, is there anything that you would like to get back to, or that you would like to add, or that we have missed during this episode?
Eve
I think there was something we talked about in the beginning with regard to why clients don't really know what they want. And at that time it came to my mind: yes, because clients often don't even know what their possibilities are. And here we are not only talking about, for example, Cognitive Services; I could bring up a lot of solutions that can be better shaped to a client. Nowadays we have so many improvements in every feature of the Microsoft Azure Machine Learning workspace, and in Cognitive Services too, and a lot of amazing, cool new frameworks and features are coming in from all kinds of providers. So if the client tells us, hey, we want to get this improved or get this in place, it just needs some creativity, and it can be done. Just don't worry about asking things. I think that applies to everyone: just ask questions, and then you should get your answers. There's nothing to worry about anymore.
Oscar
No, I think you're right there. And it's also true for building, let's say, regular software. If someone describes to you the solution they want, you're having the wrong conversation; you need to know their context, and they should describe the problem they're having. And then the specialist will open doors no one has seen before to solve the problem in a creative way. I think that's the fun thing, and that's why we're getting the certificates and all that, and just checking out the new bits of our work.
Closing – 51:36
Rick
So, to end on a movie quote: there is no spoon. Eve, thank you so very much for being our guest.
Eve
Thank you a lot. It was nice and a lot of fun.
Oscar
Thank you. Thank you for listening to Betatalks the podcast. We publish a new episode every two weeks.
Rick
You can find us on all the major streaming platforms like Spotify and iTunes.
Oscar
See you next time.
Rick
Bye.