But before we get to the project itself, let's look at the world of burns through a few statistics. The first ones sound fairly reassuring. Every year, "only" one in a hundred Czechs suffers serious burns, and 97% of these burns can be treated on an outpatient basis. This means that only three people out of every ten thousand end up being hospitalised. Unfortunately, this is where the good news ends and the sad news begins. 40% of burns involve children, and burns or scalds are the third most common cause of fatal injury. Burns also often cause fatal complications, even in cases where they do not seem too severe at first.
This brings us to the key topic of the project the SYSCOM Software development team is in charge of.
Speed is the decisive factor
The assignment from the University Hospital Královské Vinohrady was this: use the potential now offered by artificial intelligence and image-recognition methods to give doctors a means of identifying the type and extent of skin injuries with a high degree of probability, and do so as quickly as possible, so that the optimal course of action can be planned and the correct treatment can begin immediately.
AI helps determine the type and extent of injuries
It is no coincidence that the University Hospital Královské Vinohrady chose SYSCOM Software: both have been working in the field of burn recognition for more than ten years. In the past, they had already worked together to simplify the way photographs are taken and encrypted. "We are currently working on two projects. The first is the DASUV information system, which is used to take photographic records and store the data securely. The system also supports consultations with emergency services. Simply put, when an ambulance is carrying a patient with burns, the crew takes pictures of everything and can consult with a doctor," says Jitka Schořová.
The second project involves artificial intelligence and neural networks. "This is a grant project announced by the Technology Agency of the Czech Republic, and its goal is to teach deep neural networks to recognise specific formations in photographs. Specifically, the system can identify burns, detect human body parts and assess the risk of possible skin diseases," adds Jitka, who is responsible for project management and the development team at SYSCOM Software. The two projects should be combined into a single solution in the future.
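As a rough illustration of what such recognition produces (the actual model and its outputs are not public, so the numbers below are made up), a segmentation network typically assigns each pixel a score for "burned skin"; thresholding those scores gives a binary mask, from which the affected area can be estimated:

```python
# Illustrative sketch only: the real model is not public, and this toy
# score map stands in for a genuine network output.

def burn_mask(scores, threshold=0.5):
    """Turn a per-pixel score map into a binary burn mask."""
    return [[1 if s >= threshold else 0 for s in row] for row in scores]

def burned_fraction(mask):
    """Fraction of pixels flagged as burned."""
    total = sum(len(row) for row in mask)
    flagged = sum(sum(row) for row in mask)
    return flagged / total if total else 0.0

# Toy 2x4 score map: each value is a made-up probability of burned skin.
scores = [
    [0.9, 0.8, 0.2, 0.1],
    [0.7, 0.3, 0.1, 0.0],
]
print(burned_fraction(burn_mask(scores)))  # 3 of 8 pixels -> 0.375
```

In practice the estimated fraction would be mapped onto the body part detected in the same image to approximate the extent of the injury.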
AI can't replace doctors, but it can help a lot
The process as a whole does not sound too complicated at first. "The doctor usually takes photos on a mobile phone and uploads them through a mobile app. Nurses or other doctors can then work with the patient's barcode on a tablet. They take pictures of the documentation, which is uploaded straight to the server," says Miroslav Volek.
Unfortunately, healthcare is in many ways more particular and more demanding than other sectors. This became evident as soon as it was necessary to obtain a basic dataset of photos to train the AI. "Information protection, GDPR, medical confidentiality, the way the data were stored on disks: these were all obstacles we were not allowed to circumvent. On the other hand, we knew from the start that we needed a critical mass of images and photographs to get the most accurate results possible," adds the systems engineer, who is also the lead project manager for the University Hospital Královské Vinohrady.
Annotations take months; actual training takes hours
It is the annotation of the datasets that takes the most time in the whole project. "If you want to train a model, you have to take each photo separately and label it together with the doctor: point to the site of interest and define the exact degree of the burn. It's like teaching a small child. The image is then exported together with the labelled data. But for the system to start reporting relevant results, you need thousands of these annotated images. Of course, it depends on how sensitive you want the model to be and on the complexity of what it is supposed to recognise. Training the model itself then goes quickly; we're talking a matter of hours," adds Jitka Schořová.
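The workflow described above, labelling each photo with the doctor, marking the site and the burn degree, then exporting the image reference together with its labels, can be sketched with a simple record format. The field names here are illustrative assumptions, not the project's actual export schema:

```python
import json
from dataclasses import dataclass, asdict

# Sketch of one annotated training example; the field names are
# assumptions for illustration, not the project's real format.
@dataclass
class BurnAnnotation:
    image_file: str   # photograph taken by the doctor
    region: list      # polygon around the site of interest ([x, y] pairs)
    burn_degree: int  # degree of the burn, confirmed by the doctor

def export_annotations(annotations):
    """Export image references and labels together, as the text describes."""
    return json.dumps([asdict(a) for a in annotations], indent=2)

# Two made-up examples; a usable training set needs thousands of these.
batch = [
    BurnAnnotation("photo_0001.jpg", [[10, 12], [40, 12], [40, 50]], 2),
    BurnAnnotation("photo_0002.jpg", [[5, 5], [30, 8], [28, 44]], 3),
]
print(export_annotations(batch))
```

The point of the sketch is the asymmetry the interview highlights: producing each of these records requires a doctor's time per photo, while consuming thousands of them in a training run takes only hours.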
The project is now in the testing phase, and the task of the developers and doctors is mainly to debug, modify and improve the model. With each new batch of annotated input data and each retraining, the recognition becomes more and more accurate. "In that first, quick assessment it is able to make a very accurate diagnosis, but of course it can never stand in for or replace a doctor. The doctor always has to have the last word and decide on the ideal course of treatment," concludes Jitka Schořová.