CLOUDY podcast | #01 Star Wars is great, but we don't have two suns on Earth yet.
For many people, AI is already mainstream, but for many others it is still a bogeyman. Should we be worried when using artificial intelligence (AI)?
Worry is a very strong word. I would rather say we should be careful, somewhat prudent, in how we use these technologies, whether that is generating images or text, or how we perceive the wider world through them.
So can AI be a good helper for us?
Sure, it can speed up our day a lot.
When we are doing something monotonous, or need to generate text, correct grammar, or write emails, it can speed up the process. But you shouldn't believe everything artificial intelligence generates, because it still likes to make things up.
How helpful can AI be?
The short answer is: very. From an education perspective, it can make teaching simpler, whether we are educating the older generation or the younger one, and learning materials can adapt to the level and pace of each individual student.
It can also speed up software development by providing aids, whether that's test scenarios or what we call templates, sample code. In marketing, it can draft a plan, analyze articles and texts, and do solid research. And AI can optimize processes in companies.
Can AI even replace an employee in a regular company?
Not at the moment. There have been, and still are, attempts, but it often comes down to companies wanting to replace senior employees, senior in experience rather than age, who are also a bit expensive, with a fresh graduate armed with AI tools.
It often ends badly, because the experience is missing and nobody really checks the output. AI still fabricates a lot, and unless the models are fine-tuned well enough, they are not trustworthy, so the results are often disastrous.
So is AI a good servant but a bad master? Can it also be misused?
Sure, and unfortunately it happens often. A lot of AI startups were created as little more than an extension of ChatGPT: they essentially just rebranded what OpenAI built, added some wrappers and utilities of their own, and used that to pull money from people.
It could be scam emails or scam messages, and they were able to automate the whole thing with these technologies, sending them to hundreds or thousands of people at a time. It even extended to extortion, where they manipulated people's faces onto other characters, put them into compromising situations, or stole their voice.
Fake news, photo editing, etc. Do you have a specific example?
Unfortunately, yes. It's often used against the older generation: somebody calls them from an unknown number, or even spoofs another phone number, and uses a stolen voice to call family members along the lines of "I'm in trouble, send money to my account," and things like that.
It happens even in high schools: cyberbullying. People are very creative, and anything you post on Instagram, Pinterest, or Facebook can be cropped and blended into another image so seamlessly that it looks like it was done in Photoshop or other editing software. It can portray you in unpleasant, unflattering situations, and then it spreads.
So how can we tell the difference? How do we recognize whether something is fake or not?
If it is just a face, it is very difficult. You have to look at the contours, the lips, the ears, the teeth, the eyes, and the fingers. If something doesn't feel right at first glance, it's worth looking again. And suddenly it's obvious: why does this person have six fingers, or two sets of fingernails? Their eyes point in an impossible direction, they have two pupils, the ears are generated strangely...
AI already handles shadows well, so that is no longer an easy giveaway. But watch the background, because Star Wars is cool, but we don't have two suns on Earth yet.
The European Union has prepared the so-called AI Act, which aims to fight exactly this kind of thing. If it comes into wider effect and reaches the general public, every generated image or video will have to be clearly identifiable as generated by artificial intelligence, for example through some kind of watermark that cannot, or at least should not, be removable.
Let's look at students, for example. What is your opinion on students using AI to write seminar papers, bachelor's, or master's theses? Is that right or wrong?
Very good question; we ran into it just this summer. I'm still at university, so I've sat on thesis defences where students presented their work. My experience is that those who used AI were very easy to spot. They generated the abstract of their bachelor's or master's thesis and presented the work, but the problem was that they didn't know what they had done. They generated something and didn't even read it. After three or four questions, we would look at the student and say, "Well, dear colleague, that's great, but you don't know what you're talking about."
But those were exceptional cases. There were also students who wanted to make things easier for themselves and had parts generated, yet still knew what they were talking about. When we asked quite in-depth questions and they could answer them, that was fine by me.
But it shouldn't mean AI generating all of your work. If a student doesn't know what they're talking about and has everything generated even on campus, where academic integrity is held a bit higher, what will they do in practice? The AI will generate something, but it won't get them far, because they won't understand the basis of what they're doing.
Where do we go from here? What's in store for us in the near term with AI?
The models we have will be refined and will get better. The bigger question is how we obtain the data to train them, and how we do that training. But we can get very far, whether with tools in medicine, business, technology, or development... the potential is huge.
This article was based on a podcast (in Slovak). You can listen to the full podcast on Spotify or watch it on YouTube.