Better to have AI as a friend than an enemy

17. 6. 2024

 

I can think of three major technological innovations I've seen in my career: the internet (connecting everything), mobile (connecting all the time), and large language models (LLMs, generative AI). The first two spread relatively slowly, over years. AI, on the other hand, has swept in like a hurricane.

Like any big change, it is accompanied by many emotions: excitement, amazement, fears, and doubts. I have to admit that I am amazed myself at what it can do: draft a tedious text for a tender, explain a sentence whose meaning I don't understand, solve a logic problem, create a (politically correct) picture of the most ridiculous nonsense, or even design a recipe from given ingredients. And I could add countless other examples.

Meanwhile, many people worry about whether AI will start taking our jobs. I agree with the view that it will only take jobs away from those who don't know how to use it (and who thus become uncompetitive). That's why I am examining, wherever I can, where generative AI can make our work at Aricoma easier.

First successes

Where are we today, after about a year of "courtship"? AI has firmly established itself among our programmers in the form of GitHub Copilot. It is a smart "whisperer" that suggests smaller or larger pieces of code based on what the programmer starts writing. The number of colleagues using the tool continues to grow. Moreover, some of them are now experienced and know well in which situations the suggestions are most useful and how to steer them while writing source code (how to start writing the code, what to put in comments, which files to keep open, and so on). In our experience, the tool can save tens of percent of time. Of course, it is not perfect (yet): in some situations the suggested code is inappropriate and has to be discarded.

ChatGPT has proven to be another useful helper for certain types of programming tasks. Interestingly, its input need not be only text; it can also be an image. This is how we created, for example, the base HTML code for visual GUI components from their graphic design.

What will we no longer have to do?

Beyond that, we are looking for ways to use generative AI tools in other areas. The work of analysts, for example, revolves around processing and creating textual information, so it seems natural to test the possibilities of LLMs there. What about, say, having a list of use cases generated from tender documents and other available materials?

Another promising direction where generative AI will help us (and already does) is testing. In addition to writing automated tests, where the use case is very similar to that of programmers, there is, for instance, the generation of test data. All I have to do is tell ChatGPT what types of data I need and what format the output should be in, and I have the data ready in no time. Or it can write a script that generates the test data for me.
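Such a generated script tends to look something like the sketch below. The column set, names, and helper are illustrative assumptions, not the actual script we use; the point is that a deterministic generator (fixed seed) makes the test data reproducible across runs.

```python
import csv
import io
import random

def generate_customers(n: int, seed: int = 42) -> str:
    """Generate n fake customer rows as CSV text (deterministic for a given seed)."""
    rng = random.Random(seed)
    first_names = ["Anna", "Petr", "Eva", "Jan", "Marie"]
    last_names = ["Novak", "Svoboda", "Dvorak", "Cerny"]
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["id", "name", "email", "age"])
    for i in range(1, n + 1):
        name = f"{rng.choice(first_names)} {rng.choice(last_names)}"
        email = name.lower().replace(" ", ".") + "@example.com"
        writer.writerow([i, name, email, rng.randint(18, 80)])
    return out.getvalue()

# Usage: print five rows of test data
print(generate_customers(5))
```

Because the seed is fixed, the same call always yields the same data, which keeps automated tests stable.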

Last but not least, ChatGPT has repeatedly helped us prepare texts for offers, especially where we need to describe various methodologies and procedures of a general nature, and even more so where we need high-quality text in a foreign language. The question is when customers will understand that it makes little sense to require such assignments, just as some schools have come to understand that certain assignments are pointless for students in the AI era.

In addition to GitHub Copilot and ChatGPT, we use the OpenAI API services in specific cases. For example, this is how we converted source code from one programming language to another.
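A minimal sketch of such a conversion call is shown below. The prompt wording, the helper function, and the model name are my illustrative assumptions, not the exact setup described in the article; the API call itself uses the standard OpenAI Python SDK chat-completions interface and requires an API key.

```python
def build_conversion_messages(source_code: str, src_lang: str, dst_lang: str) -> list[dict]:
    """Build a chat prompt asking the model to translate code between languages."""
    system = (
        f"You are a code translator. Convert the user's {src_lang} code to "
        f"idiomatic {dst_lang}. Reply with code only, no commentary."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": source_code},
    ]

if __name__ == "__main__":
    # Requires the `openai` package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()
    messages = build_conversion_messages("def add(a, b): return a + b", "Python", "Go")
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(response.choices[0].message.content)
```

In practice the returned code still needs review and testing, just like Copilot's suggestions.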

A new skill: preparing inputs

In the future, I expect interesting side effects of using generative AI, for example in the form that documents used as inputs will take. I am thinking in particular of analytical documentation that serves as a basis for various follow-up activities. Beyond being readable by humans, it will become increasingly important that it is also well processable and "understandable" by AI tools.

Ultimately, the key aspect of using AI for us is security, in particular ensuring that our code and information do not leak uncontrollably "out into the world". We use AI tools in editions that guarantee a high level of privacy and data protection, and we avoid them on projects where we suspect their use could be problematic.

The advent of generative AI is a leap into a different world. In some areas it will help to a degree we could not have imagined not long ago. But, of course, it also brings many question marks. How much of a strain is running AI data centres? To what extent will we humans become "lazy" once again, forgetting certain skills and becoming more vulnerable? How will society deal with the fact that any text, image, sound, or video can be fake? Reasonably, I hope. Or will we wait for artificial intelligence to start managing us sensibly?
 

End note: I really did write this article all by myself, without the help of AI. How many more such articles will there be?

"The question is when will customers understand that it makes little sense to require such assignments, just as some schools have come to understand that certain assignments are useless for students in the AI era."