CLOUDY podcast | #11 AI in the state, in business and at home

The eleventh episode of the CLOUDY podcast is dedicated to AI, i.e. artificial intelligence, at the level of a person, a company, and a state. Host Andrej Kratochvíl and Anton Giertli, Senior Solution Architect at Red Hat, talked about whether AI is a threat, what mistakes companies make when using AI, and what positives AI brings to human lives.

Can AI be a threat to humans?

I don't think so. Humanity is quite resilient: we've survived three industrial revolutions, so we can handle AI as well. I'm not saying we shouldn't be careful, but I think AI brings more benefits than negatives.

Can AI replace humans? Should people worry about losing their jobs?

I like to think of AI as an assistant, as a tool that can accelerate work. It can also change the job market, but the market is constantly changing. We could have had the same debate in the past when computers came along and the position of typewriter repairman stopped being relevant.

Today, no one even addresses the fact that that position has disappeared.

What is the difference between Open-Source and Closed-Source AI models?

Open-Source is software that is often free, but that's not the rule. The source code used to build the model must be freely available, and the license must allow free distribution and modification. It must not discriminate against anyone, so it meets ethical standards. At the same time, we have a better overview of where the data comes from.

A closed-source model is, for example, the current ChatGPT model; we do not see its source code. But such a solution probably also makes use of some Open-Source projects or parts of them.

How can someone get started with AI?

AI has been around for a long time. Some algorithms were developed in the 1960s, so it's not a novelty of recent years. Back then, we were dealing with predictive AI - for example, a payment card transaction that the system evaluates as suspicious, so it blocks the card.
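The predictive-AI example above can be sketched as a simple rule-based check. This is a minimal illustration, not any bank's actual system; the field names and thresholds are invented for the sketch:

```python
def is_suspicious(tx, history_avg, home_country="SK",
                  amount_factor=10, foreign_limit=500):
    """Flag a card transaction as suspicious using simple heuristics.

    tx          -- dict with 'amount' (float) and 'country' (str)
    history_avg -- the cardholder's average transaction amount
    All thresholds here are illustrative, not real fraud rules.
    """
    # Unusually large compared to the cardholder's own history
    if tx["amount"] > amount_factor * history_avg:
        return True
    # A large transaction from an unexpected country
    if tx["country"] != home_country and tx["amount"] > foreign_limit:
        return True
    return False
```

Real predictive systems replace such hand-written rules with models trained on historical transactions, but the principle - scoring each transaction against learned patterns - is the same.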

Today, with the advent of OpenAI and ChatGPT, AI has reached the masses. To get started, all it takes is to open www.chatgpt.com and type in a question - it could easily be a request for a recipe. Even that is a way to start.

Where does AI get its information from?

We don't know exactly; it's not documented which specific sources OpenAI, for example, used. We have a rough idea... Anything on the internet is theoretically available to AI models as input data.

How is data corrected, for example, in ChatGPT? How can we trust that the data is correct?

ChatGPT itself has a warning at the bottom that it is an experimental technology and that every answer should be checked. Answers are only as good as the input data. If something is factually incorrect on Wikipedia, then the answer in ChatGPT will most likely be incorrect as well.

In the first versions of ChatGPT, the problem was that the model did not know some information that was current at the time. However, it also did not know that it did not know this information, so it started making things up - the so-called hallucinations. The vendor should therefore iron out such issues.

An individual cannot fully fix the model on their own; they can try to report an error. However, we also have Open-Source AI models that are available to the public, where the user has more influence and can report an error much more effectively.

Let's move on to companies. What are the most common mistakes companies make when starting work with AI?

The first mistake is underestimating the quality of the data. If someone has data on paper, in filing cabinets, or has data scattered across multiple databases, or has duplicates, it will be very difficult. The quality of the input data significantly affects the quality of the output.
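The data-quality problem described above can be illustrated with a small cleanup step. The record structure and field names below are hypothetical, chosen only to show how duplicates and incomplete rows degrade input data:

```python
def deduplicate(records, key="email"):
    """Remove duplicate records, keeping the first occurrence of each key.

    records -- list of dicts; rows missing the key are dropped,
    since incomplete entries degrade the training data.
    """
    seen = set()
    clean = []
    for rec in records:
        value = rec.get(key)
        if value is None:
            continue  # drop incomplete rows
        value = value.strip().lower()  # normalize before comparing
        if value not in seen:
            seen.add(value)
            clean.append(rec)
    return clean
```

Steps like normalization and deduplication are usually the first part of any data pipeline that feeds an AI model; the sketch shows why scattered, duplicated sources make that step hard.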

The second mistake can be an unsuitable choice of the external company providing the solution. This mainly concerns smaller companies, for which it is more cost-effective to outsource such a service.

What are the pros and cons of AI for companies?

Working with AI can take several forms: a company can operate the solution on its own infrastructure, or a supplier can build the solution for a contracting company. Supplier companies are usually large players such as Amazon, IBM...

The biggest advantage of having the solution in-house is that I have immediate access to the infrastructure.

Conversely, with an external provider, I don't have to deal with graphics cards, computing power, and so on. The disadvantage is that the more our solution is used on someone else's infrastructure, the more we pay for it. We are effectively penalized for the success of the solution. If I had the solution in-house, it would be exactly the opposite.
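The cost trade-off described above can be made concrete with a toy comparison. All figures here are invented purely for illustration; real cloud and hardware pricing differs:

```python
def monthly_cost(requests, mode, fixed=3000.0, per_request=0.01):
    """Compare illustrative monthly costs of two hosting models.

    'inhouse'  -- fixed infrastructure cost, independent of usage
    'external' -- pay-per-use: cost grows with every request
    All numbers are made up for the sake of the example.
    """
    if mode == "inhouse":
        return fixed
    return requests * per_request

# With these made-up numbers, the break-even point is
# 300,000 requests per month; above it, in-house is cheaper.
```

This is the "penalized for success" effect in miniature: pay-per-use cost scales linearly with adoption, while in-house cost stays flat once the infrastructure exists.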

Do we have successful AI projects in Slovakia?

Last year, I was at the ITAPA AI conference, and a Slovak company presented AI software that accelerated the diagnosis of heart attacks or coronary incidents. When someone was in the ambulance, the software could evaluate how serious it was based on some input data. Then someone presented a solution for early detection of prostate cancer.

Another example is a voice assistant that automatically called seniors and reminded them to take their medication, or that their medication was running low and they should have it prescribed again. These are meaningful uses of AI, and we can rightly be proud that they originated here, in our Slovak-Czech region.

Should the state support such projects?

Yes, of course; it's hard to say no to that. One thing is support, including financial support; the other is that it would be great if the state removed obstacles for companies - minimizing bureaucracy, simplifying legislation, and so on. Of course, we shouldn't forget about security; we can't jump into it headfirst.

You can listen to the entire podcast on SPOTIFY or watch it on YouTube.