Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)
Browsing Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ) by Title
Now showing 1 - 20 of 145
- Item (Open Access): 3D scaffold development for tissue engineering
Πάντσιος, Πασχάλης; Δεληγιάννη, Δέσποινα; Αθανασίου, Γεώργιος; Μαυρίλας, Αθανάσιος; Pantsios, Paschalis
During this thesis a novel perfusion bioreactor was constructed. The bioreactor was designed and assembled based on an existing bioreactor apparatus, an investigation of previous research, and spare parts found in the Laboratory. It was tested for functionality and non-toxicity by culturing cells in it, and it passed. In association with the Biomedical Research Foundation of the Academy of Athens (BRFAA), Human Umbilical Artery (HUA) specimens were tested; the ultimate purpose is to use HUA as grafts. Mesenchymal Stem Cells (MSCs) were cultivated in decellularized HUA in an incubator for one day and in the perfusion bioreactor for five days. The main parameter controlled was the flow rate, which determines the shear stress of the fluid (culture medium) flow. The results were significant: a successful recellularization was accomplished, with a high cell density on the lumen of the arteries. The results are depicted using Hematoxylin & Eosin staining and confocal microscopy techniques processed at the BRFAA. The second experimental series consisted of ten-layered Polycaprolactone-Carbon Nanotube (PCL-CNT) scaffolds, manufactured in our Laboratory using a prototype electrospinning unit. PCL-CNT scaffolds are considered a promising tool for osteogenesis. Perfusion bioreactor MSC cultures, halted at one and three days, were compared to static cultures using the MTT assay. A strong indication was obtained that perfusion is considerably more efficient for MSC proliferation than static culture. Unfortunately, the narrow time margins were an obstacle to a more thorough investigation of the studies above; more days of culture and repeated experiments are considered mandatory for a complete investigation.
Finally, the next step should be the investigation of MSC differentiation to either endothelial cells, or osteocytes, regarding HUA and PCL-CNT scaffolds, respectively.
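The relation between flow rate and wall shear stress that the bioreactor controls can be sketched under a fully developed Poiseuille-flow assumption in a cylindrical lumen; the target shear, medium viscosity, and lumen radius below are illustrative values, not taken from the thesis:

```python
import math

def flow_rate_for_shear(tau, mu, radius):
    """Volumetric flow rate Q (m^3/s) that produces wall shear stress tau (Pa)
    in Poiseuille flow through a cylindrical lumen: tau = 4*mu*Q/(pi*r^3)."""
    return tau * math.pi * radius**3 / (4.0 * mu)

# Illustrative: 0.5 Pa target shear in a 2 mm radius vessel,
# culture medium viscosity ~ 8.9e-4 Pa.s (close to water).
q = flow_rate_for_shear(0.5, 8.9e-4, 2e-3)
q_ml_min = q * 1e6 * 60  # convert m^3/s to mL/min
```

Inverting the same formula from a measured flow rate gives the shear stress actually applied to the cells on the lumen.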
- Item (Open Access): A new medical decision support system (MDSS) for the diagnosis of coronary artery disease (CAD) using fuzzy cognitive maps (FCM)
Αποστολόπουλος, Ιωάννης; Apostolopoulos, Ioannis
Cardiovascular diseases, of which Coronary Artery Disease (CAD) is a part, are the leading cause of death worldwide, despite the progress made in prognosis and treatment. Accurate, noninvasive diagnosis of the disease is impossible to achieve, due to the constrained accuracy of the diagnostic tests and the complexity of the parameters affecting the risk of suffering from the disease. Hence, patients are forced to undergo the invasive route of diagnosis and treatment, Coronary Angiography. It has been shown that 30% to 50% of the candidates who undergo Coronary Angiography are healthy. This is why a great deal of research has been devoted to the prognosis and automatic diagnosis of CAD. Recently, several approaches have been employed, leveraging the advances in Data Mining and Machine Learning of the past years. In this work, we approach the problem with Fuzzy Modelling, Machine Learning, and Deep Learning approaches. Building on the Diploma Thesis for the BSc Degree in Electrical and Computer Engineering, an improved method of modelling Coronary Artery Disease with Fuzzy Cognitive Maps is presented in this thesis. Next, Machine Learning and Deep Learning methods are examined. For Machine Learning, state-of-the-art classifiers are employed to perform the diagnosis. Utilizing the Myocardial Perfusion images from the database, Deep Learning with Convolutional Neural Networks is examined for the classification of the medical images. A state-of-the-art Convolutional Neural Network from the Visual Geometry Group (VGG) was employed to perform the classification task. The three methods are compared, and conclusions regarding their advantages and drawbacks are discussed.
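To illustrate the Fuzzy Cognitive Map machinery behind such a decision support system, here is a minimal sketch of the standard FCM update rule; the three concepts, their causal weights, and the sigmoid steepness are hypothetical, not the thesis's actual CAD model:

```python
import math

def sigmoid(x, lam=1.0):
    """Squashing function keeping concept activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(activations, weights, lam=1.0):
    """One FCM update: A_i(t+1) = f(A_i(t) + sum_{j!=i} w_ji * A_j(t))."""
    n = len(activations)
    return [
        sigmoid(activations[i] + sum(weights[j][i] * activations[j]
                                     for j in range(n) if j != i), lam)
        for i in range(n)
    ]

def fcm_run(activations, weights, lam=1.0, tol=1e-5, max_iter=100):
    """Iterate the map until it converges to a fixed point."""
    for _ in range(max_iter):
        nxt = fcm_step(activations, weights, lam)
        if max(abs(a - b) for a, b in zip(nxt, activations)) < tol:
            return nxt
        activations = nxt
    return activations

# Hypothetical 3-concept map: two risk-factor concepts feed a CAD output concept.
W = [[0.0, 0.0, 0.6],   # concept 0 (e.g. hypertension) -> CAD
     [0.0, 0.0, 0.8],   # concept 1 (e.g. abnormal stress test) -> CAD
     [0.0, 0.0, 0.0]]   # CAD output concept (no outgoing edges)
steady = fcm_run([0.7, 0.9, 0.5], W)
```

At the fixed point, the activation of the output concept serves as the (fuzzy) degree of support for a CAD diagnosis.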
- Item (Open Access): An Android app for real-time monitoring and analysis of electrodermal activity
Πετράι, Ελισσαίος; Μεγαλοοικονόμου, Βασίλειος; Λυμπερόπουλος, Δημήτριος; Παυλίδης, Γεώργιος; Petrai, Elissaios
A device and an Android application were developed for real-time monitoring of electrodermal activity in order to assess skin hydration. Using Electrodermal Activity (EDA), bioelectric attributes of the skin, either passive or active, can be measured. The preferred method for assessing skin hydration was based on impedance measurement. The human skin has a certain impedance or resistance; it can be easily described and simulated using a substitute electric circuit. The decision to measure skin impedance for assessing skin moisture was based on a number of reports which suggest that skin impedance provides a useful non-invasive method for assessing skin hydration. The device circuit for measuring skin impedance was designed after extensive research in the literature and implemented in our laboratory. For monitoring the measured values and for portability of the device, an Android application was developed and connected to the device via the Bluetooth Low Energy protocol. Both the device and the application are designed in a way that the user can customize them to their own needs. An experiment was carefully designed to test the device and apply it in real conditions for the intended use. The experiment included the indirect comparison of two commercially available moisturizers applied to volunteers using a specified protocol, as well as the real-time assessment of skin hydration based on the skin impedance values measured by our device. The results of our experiments were close to the expected ones. Both the device and the application can be further improved, but even in their initial (prototype) form, adhering to the protocol, they can produce valid results.
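The "substitute electric circuit" for skin is commonly taken as a series resistance (deeper tissue) in series with a resistor-capacitor pair in parallel (stratum corneum); a sketch of that model follows, with purely illustrative component values, not the thesis's circuit:

```python
import math

def skin_impedance(f, rs=120.0, rp=100e3, c=30e-9):
    """Complex impedance of a common skin equivalent circuit:
    Rs in series with (Rp parallel C). Component values are illustrative."""
    w = 2 * math.pi * f
    z_rc = rp / (1 + 1j * w * rp * c)  # parallel Rp-C branch
    return rs + z_rc

# |Z| falls with frequency; better-hydrated skin (higher C, lower Rp)
# shows a lower magnitude at a given measurement frequency.
z_low = abs(skin_impedance(20))      # ~tens of kilohms at 20 Hz
z_high = abs(skin_impedance(100e3))  # approaches Rs at 100 kHz
```

Sweeping `f` and fitting the measured magnitudes to this model is one way such a device can translate raw impedance readings into a hydration estimate.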
- Item (Open Access): Analysis and design of a remote beehive surveillance and monitoring information system
Σιακαμπέτη, Ιωάννα; Siakampeti, Ioanna
Human-Computer Interaction (HCI) is a relatively new field in the academic world, but it already provides humans with new technologies and solutions to their everyday problems. The aim of this dissertation is the creation of an application, in prototype form, that will help beekeepers in their everyday activities. One can be a beekeeper professionally or as an amateur, but in both cases the required activities are many and complex. Beekeepers must pay constant attention to their beehives and to the behavior of the bees. In order to do this, they must be informed about the weather, the climatic conditions, the areas appropriate for placing their apiaries, etc. This study starts with a literature review covering the relevant bibliography and underlying technologies such as the Internet of Things, sensors and remote monitoring systems. It then continues with the methodological analysis, which is based on the principles of the HCI field. It describes the methodological approach of the interview and observation, the reason for choosing mock-ups, and the basic heuristics of user-centered design. Chapter 4 describes the final prototype, while the evaluation process is presented in Chapter 5. The thesis ends with the conclusions and proposals for future work in Chapter 6.
- Item (Open Access): Analysis of deformable bodies during the gait cycle
Κοκκόρη, Γιαννούλα; Δεληγιάννη, Δέσποινα; Μουστάκας, Κωνσταντίνος; Λαμπέας, Γεώργιος; Kokkori, Giannoula
Total hip replacement (THR) is one of the most successful orthopedic interventions, with over 100 years of operative history. People who suffer from severe chronic pain diseases such as osteoarthritis have various disabilities in their daily routine. By replacing the hip joint with this effective procedure, patients are relieved of the pain and return to normal life. The continuous development of technologies and computers provides various ways to perform preoperative planning for the THR procedure. This thesis analyzes models of the human femur bone and of the assembly of the femur bone with the THR implant. The study focuses on the application of loads during the gait cycle and the finite element analysis of the models. The results relate to the mechanical behavior of the models, specifically the Von Mises stresses and the deformations due to the applied loads. In detail: Chapter 1 provides a brief reference to total hip arthroplasty, its historical background, the surgical procedure and the risks it poses, as well as the anatomy of the femur bone and the biomechanics of the hip joint, which are of major importance for its static analysis. There is also a description of the gait cycle and of the stress shielding phenomenon that occurs after THR during walking. Chapter 2 refers to the biomaterials used in total hip replacement. Various materials such as metals, polymers and ceramics are used in the different components of the THR implant, and the main ones are described in detail. Chapter 3 deals with the basic mechanical theory on which every structure is based. Stresses, strains and Hooke's law are presented, with emphasis on the 3D case. The Von Mises criterion and static analysis are also covered.
Chapter 4 analyzes the finite element method used in the thesis. A brief historical overview and the basic steps of the analysis (preprocessing, processing and postprocessing) are presented. Then, as the processing phase is performed by the computer, the sequence of mathematical equations and the way they are solved are also described. Finally, there is a reference to finite element software programs and an analytic description of the relevant steps in the ANSYS software. Chapter 5 presents the simulation process. First, a section was made at the appropriate point of the femoral bone and the THR implant was inserted in the right position with the assistance of the CAD software package SolidWorks. Then, the ANSYS software generated the appropriate mesh for each model and, applying the proper material properties, loading and boundary conditions, we proceeded to the analysis. Chapter 6 shows the finite element analysis results of the femoral assemblies as obtained using the ANSYS software. The results refer to the Von Mises stresses, Von Mises strains and deformations due to the applied loads of the gait cycle. Finally, the conclusions and the prospects for further studies are presented. Based on the results, we concluded that the "stress shielding" phenomenon occurs in the calcar region of interest, where the implant "shields" the bone and does not allow it to grow properly. Moreover, after the implantation we observed that the maximum stress occurs on the neck of the hip implant for all the studied materials. Finally, the Ti alloy and the Co-Cr alloy show similar results in the anterior and stem tip regions and thus have little effect on the femur bone.
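The Von Mises equivalent stress reported throughout these results is computed from the three principal stresses; a minimal sketch of that formula:

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from the three principal stresses (MPa):
    sigma_vm = sqrt(0.5 * ((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2))."""
    return math.sqrt(0.5 * ((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2))

# Uniaxial tension: the equivalent stress equals the applied stress.
sigma_eq = von_mises(50.0, 0.0, 0.0)  # 50 MPa
```

Note that a purely hydrostatic state (equal principal stresses) yields zero equivalent stress, which is why the criterion measures distortional, not volumetric, loading.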
- Item (Open Access): Applications of holography in optogenetics
Τσάκας, Αναστάσιος; Δερματάς, Ευάγγελος; Tsakas, Anastasios
The brain is a complex organ, and little is known about how it works, its diseases and their treatment. Many efforts have been made by scientists to unlock the mysteries surrounding the brain. Various techniques have been developed for the study and treatment of the brain, but they are mostly invasive and do not provide enough information. However, a new technique, Optogenetics, is a very promising method for studying the brain. Specifically, by using light, Optogenetics is able to examine how specific neural networks work, networks which are responsible for functions such as movement, information processing and emotions. By introducing appropriate photosensitive opsins into the neurons, we can activate or deactivate the function of a neuron at will and then observe the effect this has on the brain and on the overall organism. The Optogenetics method consists of four basic steps. First, a light-sensitive protein appropriate for the desired optogenetic experiment is selected. Afterwards, the light-sensitive protein has to be genetically expressed in the neuron; a viral vector is the most commonly used method for gene expression. Then, an illumination technique is selected: there are various ways to illuminate a neuron for photoactivation, such as beam directing, galvo-based scanning, direct projection and holographic projection, with each illumination technique having its advantages and disadvantages. Finally, an optical readout technique is selected for the collection of the information related to the changes in the function of the brain and in the behavior of the animal model.
- Item (Open Access): Applying big data in pharmaceutical industry: development, strategy and administration
Μουσουλέας, Ιωάννης; Παυλίδης, Γεώργιος; Μεγαλοοικονόμου, Βασίλειος; Mousouleas, Ioannis
The pharmaceutical industry today faces many challenges. Each company must promote research and pioneer the development of new drugs, minimize production costs, and understand how its customers perceive its products. In order to do this, it should collect every useful piece of information from each available data source. At the same time, data processing and management should be done at minimum cost while maximizing the data's value. This leads pharmaceutical companies to attempt to integrate new types of data and sources from around the globe, resulting in processes that collect unstructured data. Pharmaceutical companies have thus turned to procedures for collecting and processing Big Data, using techniques from artificial intelligence, data mining, text mining, sentiment analysis, etc. In our case, we have studied the introduction of big data in the pharmaceutical industry and the impact it may have on company decisions as well as on their strategy. We also designed and developed three methods using data mining tools that can drive industries to new decisions and policies. The tools concern a methodology for collecting and compiling data to monitor drug prices, the use of sentiment analysis on social networking data to enable companies to shape marketing policy and research decisions, and the use of classification algorithms, creating new methods for new, efficient products. Thus, we put forward a method for Greek hospitals and pharmaceutical industries to collect information from different sources to facilitate its analysis. In this case, data was collected from the Greek Ministry of Health, by assessing its official website, where files with drug prices are listed, and all relevant information was gathered in a single source.
Our aim was to create a file containing only the useful information, such as the price of a selected drug, the hospital that placed an order, the quantity of the order and the total cost. The selected drugs were atezolizumab, avelumab and ipilimumab, for the two major hospitals of Patras, Agios Andreas and the Rio Hospital. This procedure could facilitate decision making on drug prices and transparency in relation to sales of medicines in hospitals. In the second case, we developed an application to determine the reputation of pharmaceuticals based on public opinion, through which companies can make decisions about improvements and the promotion of their products. The application uses sentiment analysis and text mining in an organized way for the factors the company wishes to study in relation to its products. Its data is derived from the big data coming from Twitter. The application can lead to decisions that are significant for the marketing of each company, while improving product data. In the third case we present the use of classification tools, using well-known software such as WEKA, whose direct application to medical data can help pharmaceutical companies develop new technologies and methods to help patients and produce more efficient products. The above three cases show the use of methodologies on data that can directly influence the decisions of pharmaceutical companies.
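A lexicon-based polarity score is one simple way to realize the kind of sentiment analysis over tweets described above; the mini-lexicon and scoring rule below are hypothetical illustrations, far smaller and simpler than what the actual application would use:

```python
# Hypothetical mini-lexicon; a real system would use a full sentiment lexicon.
POSITIVE = {"effective", "great", "relief", "works"}
NEGATIVE = {"nausea", "worse", "pain", "useless"}

def sentiment_score(tweet):
    """Lexicon-based polarity: +1 per positive word, -1 per negative word,
    normalised by tweet length so long tweets are not over-weighted."""
    words = tweet.lower().split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / max(len(words), 1)

s = sentiment_score("this drug works great no pain")
```

Averaging such scores over all tweets mentioning a product gives a crude reputation index that can be tracked over time.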
- Item (Open Access): Artificial intelligence applications in medical imaging
Θεοδωρής, Νικόλαος; Theodoris, Nikolaos
Breast cancer is the number one cancer affecting women and causes millions of deaths. Modern technology and engineering have developed many CAD systems to increase the survival rate of women with breast tumors and help specialists diagnose the illness more easily. This thesis is about the development of a computer-aided diagnosis system responsible for the detection and segmentation of breast cancer in digital mammograms. Initially, breast cancer is analyzed in terms of its anatomy and its causes. More specifically, the symptoms and the classification of breast cancer are crucial to our work. The many ways to diagnose this illness are also described. In the next chapter, the basics of neural networks, machine learning and convolutional neural networks are explained. CAD systems use neural networks for the classification, detection, or segmentation of breast cancer; for this reason, we describe how they work. A state-of-the-art survey is also given of the neural networks used in medical imaging of breast cancer. Moreover, my UNet development for breast cancer segmentation on the INbreast database is presented. First, the software and the materials that I used to create the system for breast imaging are described. The techniques of image preprocessing and data augmentation and the UNet architecture are described along with an analysis of the Python code. Furthermore, I compare the results of the UNet development from different experiments in which I changed the hyperparameters. The prediction results for the breast cancer binary masks and the evaluation metrics that I used to compare them are presented in this chapter. Finally, a summary of the conclusions that emerged from this research is given and some possible future modifications and ideas are explored.
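For comparing predicted binary masks against ground truth, the Dice coefficient is the standard segmentation metric; the abstract does not name its exact metrics, so this is a generic sketch rather than the thesis's evaluation code:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two flat binary masks (lists of 0/1):
    Dice = 2|P intersect T| / (|P| + |T|). eps guards the empty-mask case."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)

# One overlapping pixel out of predicted {2 px} and target {1 px}: 2*1/(2+1)
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])
```

A Dice of 1.0 means perfect overlap; in practice real masks are 2-D arrays flattened before scoring.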
- Item (Open Access): Biomechanical study of large blood vessels using fluid-structure interaction simulations
Ζησιμόπουλος, Σπήλιος; Zisimopoulos, Spilios
Cardiovascular diseases are one of the major causes of mortality in the modern world. A possible method of treating such conditions is the replacement of a blood vessel's pathological segment with an artificial one. Dacron and ePTFE are the main materials currently used in the manufacturing of artificial vessels. A major disadvantage of these materials is their low compliance relative to biological vessels, which may lead to additional pathologies. State-of-the-art research focuses on the production of artificial vessels with increased compliance; the manufacturing of artificial vessels from PCL using the electrospinning method is a potential solution. In this Master's thesis we performed a biomechanical study of blood vessels, covering the various theoretical aspects involved in cardiovascular problems. Then, a combined experimental and FEM computational study was carried out. A PCL/PVA carotid-caliber vessel was produced by the electrospinning method and tested under various operating conditions using a physiological flow apparatus. A finite element FSI model was developed in order to simulate and assess the experimental results, as well as to predict the vascular compliance and other mechanical properties. The simulations were performed at pressure intervals between 0 and 150 mmHg. For the modeling of the PCL vessel wall, various elastic and hyperelastic material models were examined. All material models showed errors of less than 16% when comparing the calculated vascular compliance to the experimentally measured one, and less than 7% compared with bibliographic values at the physiological 80-125 mmHg pressure interval. In this interval, the FEM-calculated compliance ranged from 4.7 to 5.1 %/100 mmHg×10⁴, while the experimental one had a mean value of 4.4 %/100 mmHg×10⁴, with the corresponding bibliographic value being 5 %/100 mmHg×10⁴.
Moreover, at least one material model had an error of less than 12% for operating pressures up to 150 mmHg. In addition, the maximum values of the 1st principal stress, at 100 kPa, and of the fluid velocity, at 0.93 m/s, were validated by relevant studies of carotid-caliber arteries. The importance of the deformable-wall analysis provided by FSI methods was highlighted by the rigid model's overestimation of wall shear stress, up to 45% compared to compliant models. In conclusion, the FEM simulations showed that FSI modelling can yield more realistic results than rigid-wall simulations. Although no wall material model can fully describe the PCL vessel's response over every pressure interval, a hyperelastic constitutive model such as an Ogden material is more appropriate for the modelling of arterial walls than elastic materials. On the other hand, a linear elastic material model is better suited for less compliant artificial vessels. Overall, the FEM model developed in this study was validated by both experimental and bibliographic data and has the potential to be used in predictive models in the future.
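The compliance figures quoted above are consistent with the usual diametral-compliance definition (as in the ISO 7198 standard for vascular prostheses); a sketch follows, where the two diameters are illustrative values chosen to land near the experimental mean, not measurements from the thesis:

```python
def compliance(d_low, d_high, p_low, p_high):
    """Diametral compliance in % per 100 mmHg (ISO 7198-style):
    C = ((d_high - d_low) / d_low) / (p_high - p_low) * 10**4."""
    return (d_high - d_low) / d_low / (p_high - p_low) * 1e4

# Illustrative diameters (mm) at the physiological 80-125 mmHg interval:
# a 0.12 mm distension of a 6.00 mm vessel gives ~4.4 %/100 mmHg.
c = compliance(6.00, 6.12, 80.0, 125.0)
```

The 10^4 factor converts the fractional distension per mmHg into the % per 100 mmHg units used in the abstract.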
- Item (Open Access): Building an operating system from scratch and studying the ethical aspects of its operations
Ασημακοπούλου, Ραφαηλία; Asimakopoulou, Rafailia
What happens when we press the power button on our computer? How does the operating system work with the computer hardware? We answer these questions in this thesis. We want an IT student to be able to edit and run the software locally on their own computer, based on the theory described within the thesis and the code provided in a GitHub repository. We know that the first computers were designed as early as the middle of the 20th century. Initially, they were massive and their use served companies for large-scale data processing, but today they are portable and pervasive among us. They intervene in our lives and shape them almost imperceptibly. They became mass communication tools, mass computing media, and extensions of human needs. Electronic product designers today are considered architects of experiences, who regard quality as ease of use for the novice user. Yet the modern digital age calls us to answer a new question: how much liberty does the user have in the digital environment with which one interacts? The evolution of open source software is another fact. The creation of small portable virtual machines that run on old and new hardware, even on the internet, makes room for the entry of the web3 era and, in combination with the development of personal computers, following the "do-it-yourself" movement, promises to contribute to more secure use of our data.
- Item (Open Access): Business intelligence systems in the field of epidemiology. The case of Greece
Καραγεώργου, Μαρίνα; Παυλίδης, Γεώργιος; Λυκοθανάσης, Σπυρίδων; Μεγαλοοικονόμου, Βασίλειος; Karageorgou, Marina
By epidemiological surveillance we mean the systematic collection, analysis and interpretation of epidemiological data for the purpose of public health measures. The need for effective epidemiological surveillance in a country arises from the need for accurate and specific knowledge of the epidemiology of diseases in its population, early detection of epidemic outbreaks, identification of individual cases of disease indicating intervention in the patient's environment, evaluation of public health interventions and strategies (monitoring progress toward the objectives set) and, finally, understanding the health problems of the population and their natural course. In Greece, epidemiological surveillance is carried out by the Hellenic Center for Disease Control and Prevention (HCDCP) through the Basic System of Epidemiological Surveillance for infectious diseases, which includes, inter alia, the mandatory declaration network, consisting of the Compulsory Disease Declaration System and the Laboratory Declaration System. The Compulsory Disease Declaration System is the most widespread system for the detection and surveillance of infectious diseases and has been operating for many years in most countries of the world. It is based on the existence of an institutional framework that obliges doctors to declare specific infectious diseases. With this system, however, there is significant under-reporting, which for many diseases reaches 80-90%, as well as a significant delay in the declaration of diseases. Given that public health risks are not static but change over time, periodic review of the priorities of epidemiological surveillance in each country is considered imperative. Technological developments and the advancement of scientific knowledge also influence the surveillance strategy and methods.
Progress in the field of information technology opens new possibilities in the field of surveillance, reducing its costs and increasing its scope. In this diploma thesis we studied Business Intelligence systems in the field of Epidemiology. Technological development in the field of computers provides the appropriate tools for the automation of the system, with the aim of direct recording of cases, correct processing of data and extraction of reports, whose data will be addressed to specific users at a given time. The development of the system is done with SQL Server Management Studio and Microsoft Power BI, which are used respectively to create a database of records and connect it to a Data Warehouse, and to create reports. The first chapter discusses the considerations about the existence of an automated Epidemiological Surveillance system, which led to the purpose of writing this diploma thesis, and finally outlines the literature sources. The second chapter introduces the term e-Health, which covers a wide range of information and communication technology tools aimed at better prevention, diagnosis, treatment, monitoring and management of health and lifestyle. Some of these applications are also mentioned, such as the Electronic Health Record, Computerized Physician Order Entry, ePrescription, Clinical Decision Support Systems, Telemedicine, Health Knowledge Management, Virtual Healthcare Teams, and mHealth. Chapter three presents Business Intelligence (BI), a technology-driven process for analyzing data and presenting actionable information that helps business executives, managers and other end users make more informed business decisions. In addition, it is analyzed into its core parts, such as Decision Support Systems, Online Analytical Processing, Executive Information Systems and Data Warehouses, through a historical review, along with several uses in the field of healthcare.
The fourth chapter concerns the study of two cases, in Poland and Stockholm, where "intelligent" technologies were used in the field of epidemiology. More specifically, in the case of Poland, dashboards of epidemiological data derived from various sources are produced, and in the case of Stockholm, a system is developed whereby GIS software achieves the storage, analysis and spatial visualization of environmental and epidemiological data. The fifth chapter describes the existing Epidemiological Surveillance System in Greece, which operates mainly through three surveillance systems: the Mandatory Notification System, the Primary Health Care Sentinel Surveillance System, and the Laboratory Surveillance System. The various shortcomings and possible failures of this system are also highlighted. The sixth chapter describes the practical part of the thesis. Specifically, within the framework of the development of the Epidemiology Information System, we conducted a feasibility study, systems and requirements analysis, and the design and implementation of the Data Warehouse. The need for a platform to facilitate the recording, processing and extraction of data has been identified. Finally, we have reached a number of conclusions regarding the benefits of an Epidemiological Business Intelligence System. In particular, the reduction of time and of human errors in the registration and processing of data is an important achievement. The cost reduction due to the reduction of the required man-hours should also be counted among the advantages of the system. Notification of possible outbreaks could be a very useful tool in the hands of doctors and the HCDCP. In general, the need to develop and improve Business Intelligence systems in the field of epidemiology is becoming increasingly imperative, with positive results.
- Item (Open Access): Computational investigation of two phase flow in artery bifurcation
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)Κατσιγιάλου, Αικατερίνη; Μάργαρης, Διονύσιος; Μαυρίλας, Δημοσθένης; Αθανασίου, Γεώργιος; Katsigialou, AikateriniIn the present study a computational investigation of two phase flow in an artery bifurcation was contacted, using Computational Fluid Dynamics (CFD). In particular, we proposed an eulerian granular two phase model for the modeling of blood flow in an normal and a corresponding diseased left coronary artery bifurcation. Plasma was modeled as the continuous and liquid phase and Red Blood Cells (RBCs) as the granular and dispersed phase. The effect of the percentage of stenosis of 55%, 65% and 75% of a lesion type (1,0,1) of Medina Classification was investigated for the stage of rest and the stage of hyperemia. Also, the effect of the bifurcation angle from 50° to 70° and 90° was studied. A total number of 28 simulations were carried out. Results between single phase and two phase modeling of blood are compared in order to evaluate the proposed two phase model of blood flow. The geometric models of left coronary bifurcation were designed in the commercial program SolidWorks and the CFD simulations were carried out with the use of the commercial program Ansys Fluent (v16). In the first chapter, we present some basic biological knowledge about the cardiovascular system, the arteries and specifically coronary arteries, and the left coronary artery bifurcation. We also explain what atherosclerosis is and we discuss about significant hemodynamic factors that characterize the blood flow in left coronary artery bifurcation: the Wall Shear Stress (WSS) and the Fractional Flow Reserve (FFR). In the second chapter we can see the exact dimensions of the geometrical model of the ideal left coronary bifurcation that was created in SolidWorks. Also, we discuss about the Medina classification of coronary bifurcation lesions, and we explain the reason for selecting to study the type (1,0,1). 
Finally we can see the geometries of all created models. At the third chapter, we discuss the theory that was applied for the CFD investigation. First, we explain basic fluid dynamics that were used for single phase modeling of blood, and then we analyze the multiphase (two phase) modeling approach. The multiphase modeling approach was achieved using the eulerian granular model of Ansys' Fluent, considering plasma as continuous liquid and phase and RBCs as dispersed and granular phase. Then, we explain the meaning of the Reynolds Number, and we present the theory of k-ε dispersed turbulence model that was used in the case of multiphase modeling of blood flow at the stage of hyperemia. At the fourth chapter, we mention the assumptions that we made in order to solve our problem. We assumed blood as newtonian fluid in single phase modeling, and plasma phase also newtonian in the case of multiphase modeling. We also assumed homogenous, isotropic and rigid artery walls. The pressure at the bifurcation outlets, the left circumflex artery (LCX) and the left anterior descending artery (LAD), were assumed constant. Then, we mention the boundary conditions and the solution methods that were used. In the fifth chapter, we present the meshing methods that we used to grid our models. We used a patch confirming method of tetrahedrons with an inflation of the wall. Methods of measuring the quality of grid's elements, like orthogonal quality, skewness and aspect ratio are also presented. First we accomplish a good element quality and then we verify that the results of this study are independent of the mesh and element size, conducting a mesh independence study. At the sixth chapter, the results of the present study are represented. First we validate the methods used in this study. Then, results about the effect of the percentage of stenosis of left coronary artery bifurcation and the effect of the change in bifurcation angle are discussed. 
Results are presented comparatively between multiphase and single-phase modeling of blood flow. The analysis covers the important hemodynamic quantities of mass flow rate, velocity, pressure, WSS, FFR, RBC granular temperature and RBC volume fraction. These quantities are also compared between rest and hyperemia. Finally, further investigation is carried out using the k-ε turbulence model of Ansys Fluent for single-phase modeling of blood in the left coronary bifurcation at hyperemia. Concluding, in the seventh chapter, similarities are found between the two-phase and the single-phase modeling of blood flow in the left coronary artery. A stenosis percentage above 55% in the diseased coronary bifurcation appears to play an important role in the significant hemodynamic factors mentioned above, while increasing the bifurcation angle from 50° to 70° and 90° at 75% stenosis does not affect our results. The results of the multiphase modeling of blood also indicate a good representation of blood's shear-thinning property and of the Fåhræus-Lindqvist effect. Further experimental investigation is suggested to evaluate the accuracy of this result. Finally, for single-phase modeling of blood in the artery bifurcation at hyperemia, a turbulence model may be necessary, as the Reynolds number exceeds 500. For this reason as well, an experimental extension of this work is suggested.
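The turbulence criterion in the conclusion can be illustrated with the standard pipe-flow Reynolds number, Re = ρvD/μ. The sketch below is illustrative only: the blood density, viscosity, vessel diameter and flow velocities are assumed representative values, not figures taken from the thesis.

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Pipe-flow Reynolds number Re = rho * v * D / mu (SI units)."""
    return density * velocity * diameter / viscosity

# Assumed representative values for coronary flow (not from the thesis):
RHO_BLOOD = 1060.0   # kg/m^3, whole-blood density
MU_BLOOD = 0.0035    # Pa*s, whole-blood dynamic viscosity
DIAMETER = 0.004     # m, ~4 mm proximal left coronary artery

re_rest = reynolds_number(RHO_BLOOD, 0.15, DIAMETER, MU_BLOOD)   # resting flow
re_hyper = reynolds_number(RHO_BLOOD, 0.60, DIAMETER, MU_BLOOD)  # hyperemic flow
```

With these assumed values, the hyperemic Reynolds number lands above the 500 threshold mentioned in the abstract, while the resting one stays well below it, which is the rationale for switching on a turbulence model only at hyperemia.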
- ItemOpen AccessContent based medical image retrieval utilizing sparse coding techniques
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)Κουτσονικολή, Αικατερίνη; Μπερμπερίδης, Κωνσταντίνος; Μουστάκας, Κωνσταντίνος; Κωσταρίδου, Λένα; Koutsonikoli, Aikaterini
In this study, we propose a novel dictionary-learning-based multi-level clustering method for content-based medical image retrieval (CBIR). Compared to previous works, our innovations are that the center of the informative part of the image is computed, the entropy of pixel intensity values is used as a feature over the regions of the image, and the K-SVD method is used to generate a dictionary for each cluster. The process is then repeated within each cluster in order to form sub-clusters. The performance of the proposed method is evaluated on the ImageCLEF Medical Dataset. Our feature extraction methods use image region partitionings that aim either at providing rotation- and translation-invariant CBIR, or at exploiting the rich information usually available at the center of each medical image. Finally, we compare the performance of the different feature extraction methods, as well as of the plain k-means algorithm, and present our findings.
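The entropy feature mentioned above can be sketched as follows: the image is split into a regular grid and the Shannon entropy of the intensity histogram is computed per region. The grid size and bin count below are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

def region_entropy(region, bins=256):
    """Shannon entropy (bits) of the pixel-intensity histogram of a region."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def entropy_features(image, grid=(4, 4)):
    """One entropy value per cell of a grid partition of a 2-D image."""
    h, w = image.shape
    rh, rw = h // grid[0], w // grid[1]
    return np.array([
        region_entropy(image[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw])
        for i in range(grid[0]) for j in range(grid[1])
    ])
```

A flat region yields zero entropy and a noisy one approaches 8 bits, so the feature vector captures how much intensity variation each region carries.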
- ItemOpen AccessCurrent insight into kidney fibrosis and the role of EGFR as a potential therapeutic target
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)(2023-02-17) Κουκουτσίδη, Παναγιώτα; Koukoutsidi, Panagiota
The epidermal growth factor receptor (EGFR) is a member of the ErbB/HER family of receptors, and its expression can be detected in almost every tissue of the human body. EGFR is a receptor tyrosine kinase with 11 known ligands. Over the years, researchers have conducted numerous experiments to determine its structure and the receptor's potential implications in human physiology and pathophysiology. Renal fibrosis is the final common manifestation of a wide variety of chronic kidney diseases, such as diabetic and obstructive nephropathy. Chronic kidney disease has a significant impact on patients' lives, and its management poses a challenge for physicians and healthcare systems globally, especially because of its high prevalence. This thesis presents a description of the normal extracellular matrix (ECM), the fibrotic process and its major mediators. Based on the current literature, researchers argue that sustained or dysregulated activation of EGFR leads to renal fibrosis via the following mechanisms: a) activation and increased expression of TGF-β1, b) arrest of epithelial cells in the G2/M phase of the cell cycle, and c) excessive production of cytokines and chemokines. Furthermore, the role of two EGFR ligands, amphiregulin and heparin-binding EGF, in the fibrotic process is explained. EGFR could be used as a therapeutic target for the treatment of chronic kidney disease. Nevertheless, extensive research has yet to be conducted to elucidate the exact molecular pathways implicated in EGFR activation. The aim of the thesis is to emphasize the structural and functional complexity of the kidney and to present the most prominent information about EGFR, along with its role in chronic kidney disease. It also aims to highlight the absence of clinical trials testing specific anti-EGFR agents as potential therapeutic agents for renal fibrosis.
The interdisciplinary approach that was followed is essential for every scientist who wishes to dive into research and address complex clinical challenges.
- ItemOpen AccessDeep learning in medical image analysis : a comparative analysis of multi-modal brain-MRI segmentation with 3D deep neural networks
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)Αδάλογλου, Νικόλαος; Δερματάς, Ευάγγελος; Δερματάς, Ευάγγελος; Adaloglou, Nikolaos
Volumetric segmentation of magnetic resonance images is essential for diagnosis, monitoring, and treatment planning. Manual practices require anatomical knowledge, are expensive and time-consuming, and can be inaccurate due to human error. Automated segmentation can save physicians time and provide an accurate, reproducible solution for further analysis. In this thesis, automated brain segmentation from multi-modal 3D magnetic resonance images (MRIs) is studied, and an extensive comparative analysis of state-of-the-art 3D deep neural networks for brain sub-region segmentation is performed. We start by describing the fundamentals of MR imaging, because it is crucial to understand the input data before training a deep architecture. Then, we provide the reader with an overview of how deep learning works by extensively analyzing every component (layer) of a deep network. After studying the fields of magnetic resonance and deep learning separately, we attempt to give a broader perspective on the intersection of these two fields across a range of applications of deep networks, from MR image reconstruction to medical image generation. Our work focuses on multi-modal brain segmentation. For our experiments, we used two common benchmark datasets from medical imaging challenges. Brain MR segmentation challenges aim to evaluate state-of-the-art methods for brain segmentation by providing a 3D MRI dataset with ground-truth tumor segmentation labels annotated by physicians. In order to evaluate state-of-the-art 3D architectures, we briefly analyze the authors' approaches and provide the reader with an intuition behind their design choices. We perform a comparative analysis of the baseline architectures through extensive evaluations. The implemented networks were based on the specifications of the original papers.
Finally, we discuss the reported results and provide future directions for implementing an open-source medical segmentation library in PyTorch, along with data loaders for the most common medical MRI datasets. The goal is to produce a 3D deep learning library for medical imaging tasks. We strongly believe in open and reproducible deep learning research. To allow our results to be reproduced, the code (alpha release) and materials of this thesis are available at https://github.com/black0017/MedicalZooPytorch
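Segmentation challenges of this kind are typically scored with overlap metrics; the specific metric is not named in the abstract, but the Dice coefficient is the de facto standard for tumor segmentation, so a minimal NumPy sketch of it is given here as an assumed example.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of the same shape:
    2*|P & T| / (|P| + |T|), in [0, 1]; eps guards against empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction scores 1.0 and a fully disjoint one scores ~0, which makes Dice a convenient single number for comparing the baseline architectures on a common footing.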
- ItemOpen AccessDesign and implementation an online lab exercises for teaching the development of client using Socket programming in C
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)Talpur, Samia; Δενάζης, Σπυρίδων; Δενάζης, Σπυρίδων; Λαμπρόπουλος, Κωνσταντίνος
This thesis aims at providing an online lab course on forge box that introduces students to writing the code for a server, using socket programming in C, so as to serve multiple clients. The purpose of the course is to ease students into the Linux environment in the classroom, so that the basics of socket programming are grasped promptly, avoiding the usual hassles associated with unfamiliar software. It will also be of considerable assistance to the teacher, as the theoretical and practical aspects are connected through this course, providing an adequate setup on which to build a stronger foundation in socket programming.
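The server pattern the course teaches (bind, listen, accept in a loop, serve each client concurrently) is language-agnostic. The course itself targets C; the sketch below shows the same pattern in Python, with a thread per client as an assumed stand-in for whatever concurrency mechanism the course's C exercises use, and an echo handler as a hypothetical workload.

```python
import socket
import threading

def handle_client(conn):
    """Serve one client: echo every received chunk back until EOF."""
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def serve(host="127.0.0.1", port=0):
    """Bind, listen, and accept clients in a background loop.
    port=0 lets the OS pick a free port; the chosen port is returned."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()

    def accept_loop():
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle_client, args=(conn,),
                             daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv.getsockname()[1]
```

In the C version, `socket()`, `bind()`, `listen()`, `accept()` and `fork()`/`pthread_create()` play the roles of the calls above.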
- ItemOpen AccessDesign, development and evaluation of a chatbot that supports the interaction of a platform user
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)Πούλιος, Ευάγγελος; Poulios, Evangelos
The subject of the current Thesis is the development of an interactive dialog bot (chatbot) to support the interaction between a user and a platform. More specifically, we chose to investigate the field of customer service, for the case of a hotel business. Customer service is an area where Artificial Intelligence (AI) can be utilized, and it is examined here with respect to the adoption of new, innovative technologies and the quality of the service finally delivered. Today, the customer service industry tends to make good use of new AI technologies to offer an improved standard of service, but also to develop new approaches in the same direction. AI appears able to offer remarkable capabilities and to gradually replace the physical presence of humans. The subject of the present Thesis was chosen with the aim of approaching a realistic field through the use of modern technologies, and AI in particular. It focuses on the case of a hotel business, a characteristic domain in which software companies are already active. The final goal of the current work is to gain experience in applying these technologies in fields where human labor has traditionally been employed, and to examine whether such approaches can be the future of customer service. From the ecosystem of present-day dialog software frameworks, we chose the Microsoft Bot Framework. Using that framework, we designed an application that can answer questions based on a knowledge base, and can also collect details through a dialogue between the hotel's customer and the dialog software, in order to inform the customer about room availability.
The knowledge base may contain question-answer pairs from the official list of frequently asked questions (FAQ) that the hotel has posted on its website, from other information that the hotel communicates in various ways on its official website, and finally from a knowledge base already offered by the framework, by selecting the friendly chit-chat mode. The other part of the application, which reports room availability, uses a language understanding service (LUIS) through a programming interface (API) to match the information entered by the customer against pre-registered entities. Once the necessary information has been collected from the customer and verified through validation checks, it is forwarded to execute a query against a database designed to support the room reservation system. The final dialog application can stand in for the hotel's receptionist. Essentially, like a receptionist, it can answer questions about the hotel, as usually described on its official website, and also execute queries against the room reservation system in order to inform the customer about room availability. The dialog application can be offered both on a hotel website and on the social media channels supported by the hotel. Below, we describe the structure of the Thesis and close with conclusions. In the First Chapter we introduce the categories of dialog software that exist today. We also cover basic terms that are necessary to understand the subject of conversational software. We continue with Chapter Two, where we list four widely accepted frameworks used in the field of dialog software development. Of these, we provide more detail on the Microsoft Bot Framework, which is our choice for the current Thesis. In Chapter Three we analyze the utility of dialog software for the customer service industry. A typical case in this industry is a hotel business, which was selected as the case for which we developed the dialog software application.
In this chapter, we also define the goals, as well as the limits we have set, for the desired functionality that our application will offer. Chapter Four is a brief guide to the steps we followed in order to create the AI-backed services that form an essential part of the basic architecture we developed for the conversational software. In the Fifth Chapter we include the results of the design and development of the conversational software, and we also propose a heuristic evaluation methodology that can be applied to our application. We close the present work with Chapter Six, where we give a brief review. Summing up, given that the aim of this Thesis was to develop dialog software to serve and support the interaction of a platform user, we conclude that this approach is quite promising. Existing frameworks, such as the one we selected, enable the development of valuable software for services that until recently were available only from humans. While writing the Thesis, we became acquainted with the architecture that the development of dialog software follows. The benefit of having such an automated application is undeniable for many categories of business. As AI improves, this kind of approach will be able to offer increasingly better prospects for conversational communication. Of course, as we ourselves recognize, considerable effort is required to add features beyond the classic ones already offered by dialog software. In any case, the general goal is to expand the functionality offered by the dialog application in order to cover a wider range of services. In our case specifically, examining human speech as an additional means of communication is certainly of interest; that prospect is already being considered in the dialog software industry, and it would be a worthwhile direction in which to continue the current work. Naturally, this could also be extended to support more than one spoken language.
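The availability dialogue described above boils down to slot filling: prompt the customer for each missing piece of information, then act once every slot is present. The slot names and prompts in this sketch are hypothetical placeholders, not the application's actual LUIS entities or Bot Framework dialogs.

```python
# Hypothetical slots for a room-availability request (illustrative only).
REQUIRED_SLOTS = ("check_in", "nights", "room_type")
PROMPTS = {
    "check_in": "When would you like to check in?",
    "nights": "For how many nights?",
    "room_type": "Which room type would you like (single/double/suite)?",
}

def next_turn(slots):
    """Return the prompt for the first missing slot, or, once every slot
    is filled, the confirmation that would trigger the reservation query."""
    for name in REQUIRED_SLOTS:
        if name not in slots:
            return PROMPTS[name]
    return ("Checking availability for a {room_type} room, "
            "{nights} night(s) from {check_in}.".format(**slots))
```

In the real application, LUIS extracts the entity values from free-form user messages; here the `slots` dictionary stands in for that extraction step.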
- ItemOpen AccessDesign, implementation and evaluation of deconvolution methods in periodic biomedical signals
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)Μακρυγιώργου, Δήμητρα; Δερματάς, Ευάγγελος; Δερματάς, Ευάγγελος; Μπερμπερίδης, Κωνσταντίνος; Σκόδρας, Αθανάσιος; Makrygiorgou, Dimitra
In the present thesis, an extensive study and simulation of source separation methods for periodic biosignals is conducted. The nature of biosignals, and the characteristics arising from their origin and acquisition, make separation a cumbersome task. A wide variety of source separation techniques have been implemented, each based on a different assumption about the signals' characteristics. Methods based on signal correlation, independence, and periodicity are studied. Specifically, fastICA, Infomax, JADE, AMUSE, SOBI, πCA, natural-gradient methods, contrast-function approaches for convolutive mixtures, and an algorithm based on signal cyclostationarity are simulated and examined on both synthetic data and real biosignals. Finally, a single-channel approach that takes into account the frequency components of the signals under investigation is implemented and examined on real biosignals.
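Of the listed algorithms, AMUSE is the simplest to sketch: whiten the mixtures, then diagonalize a single symmetrized time-lagged covariance matrix. The minimal NumPy version below runs on synthetic data; the lag value and the test signals are our own illustrative choices, not the thesis's experimental setup.

```python
import numpy as np

def amuse(X, tau=10):
    """AMUSE blind source separation for X of shape (n_signals, n_samples):
    whitening followed by eigendecomposition of a symmetrized time-lagged
    covariance matrix. Sources come back up to permutation, sign, scale."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X           # whitened mixtures
    C = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
    _, V = np.linalg.eigh((C + C.T) / 2.0)           # symmetrize, then eig
    return V.T @ Z

# Synthetic demo: two sinusoids with different periods, linearly mixed.
t = np.linspace(0.0, 1.0, 500, endpoint=False)
S = np.vstack([np.sin(2 * np.pi * 1 * t), np.sin(2 * np.pi * 7 * t)])
A = np.array([[1.0, 0.5], [0.4, 1.0]])               # mixing matrix
Y = amuse(A @ S)                                     # recovered sources
```

AMUSE relies on the sources having distinct autocorrelations at the chosen lag, which is exactly why it suits periodic biosignals with different dominant frequencies.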
- ItemOpen AccessDevelopment of a new electrospinning setup for reinforced composite nanopolymer scaffolds for orthopedic applications
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)Δουλγκέρογλου, Μελέτιος-Νικόλαος; Δεληγιάννη, Δέσποινα; Μαυρίλας, Δημοσθένης; Αθανασίου, Γεώργιος; Doulgkeroglou, Meletios-Nikolaos
During this thesis, we installed and established a functional electrospinning unit in the Laboratory of Biomechanics and Biomedical Engineering in order to test polymers and substances that may be suitable for cellular scaffolds. More specifically, polycaprolactone (PCL) was used as the main polymer, dissolved either in acetone or in 99% acetic acid. The parameters that affect electrospinning were tested, such as the polymer concentration and the distance between the collector and the needle, and mixtures of polycaprolactone with chitosan and with carbon nanotubes were electrospun successfully. For the qualitative evaluation of the results, a scanning electron microscopy unit was used, with which the architecture of the electrospun materials can be viewed and the fiber diameters can be measured. Electrospun polycaprolactone samples, along with samples containing a blend of polycaprolactone with carbon nanotubes, were tested mechanically. More specifically, tensile loading was applied to the specimens and the modulus of elasticity was estimated. Three methods were used for the estimation of the modulus of elasticity, and statistical tests were performed in order to examine whether the different methods may be considered equivalent at a given confidence level. The three methods consisted of estimating the modulus of elasticity by a regression line within the proportional limit of the specimens on the respective stress-strain curve, by a tangent line at a specific point of the linear region, and by the secant line that starts at the origin of the axes and ends at the end of the linear region. Furthermore, since scaffolds may be considered two-dimensional at the macro level, a novel method was implemented to create a porous three-dimensional scaffold.
A new type of porous collector was designed and implemented, based on the design of a commercial screen. Layers of polycaprolactone were electrospun and bonded with a polymeric glue, leading to the creation of the three-dimensional scaffold. The final part of this thesis involved cell cultures on selected materials. The main purpose was to evaluate the proliferation and differentiation of mesenchymal stem cells derived from the umbilical cord. The amount of total protein produced is an indicator of proliferation, while the expression of alkaline phosphatase is an indicator of the differentiation of cells into osteoblasts. Cell cultures were observed on the third and seventh days, and staining of the cell nuclei was performed to visualize their proliferation.
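The three modulus-estimation methods described earlier (regression over the proportional limit, tangent at a point, secant from the origin) can be sketched numerically. The data below are synthetic and perfectly linear with an assumed E of 200 MPa, chosen only so that all three methods visibly agree; they are not the thesis's measurements.

```python
import numpy as np

def modulus_regression(strain, stress):
    """E as the least-squares slope over the (assumed linear) region."""
    slope, _intercept = np.polyfit(strain, stress, 1)
    return slope

def modulus_tangent(strain, stress, i):
    """E as a central-difference tangent at index i of the linear region."""
    return (stress[i + 1] - stress[i - 1]) / (strain[i + 1] - strain[i - 1])

def modulus_secant(strain, stress):
    """E as the secant from the origin to the end of the linear region."""
    return stress[-1] / strain[-1]

# Synthetic, perfectly linear stress-strain data (assumed E = 200 MPa).
strain = np.linspace(1e-4, 0.01, 50)
stress = 200e6 * strain  # Pa
```

On ideal linear data the three estimators coincide; on real, noisy stress-strain curves they diverge, which is what the thesis's statistical equivalence tests examine.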
- ItemOpen AccessDevelopment of an online platform for real-time facial recognition
Τμήμα Ηλεκτρολ. Μηχαν. και Τεχνολ. Υπολογιστών (ΜΔΕ)Μίχος, Ευάγγελος; Michos, Evangelos
This Master Thesis aims at extending existing work on facial recognition algorithms with respect to accuracy and efficiency, and at proposing an online platform that can be used by police forces for effective real-time human recognition. The main features of the platform include a) inserting, editing and deleting user and criminal information and b) searching for criminals based on their picture through a livestream camera feed and identifying them. The platform supports two different types of users: a) police employees in the headquarters/precincts and b) police administrators, who have a higher level of access and are also responsible for database maintenance. Regarding the facial recognition algorithm, the approach uses and extends the Haar Cascade algorithm for real-time recognition, which is widely considered one of the most efficient and most used algorithms for this purpose. The website was developed following the Model-View-Controller (MVC) architectural pattern, separating the platform into three different logical components. As for criminal identification, it was made possible through image pattern matching between the provided criminal's image and snapshots of faces detected in the livestream feed. The platform includes a live-feed section, accompanied by different options for video filters, enabling the user to select the best filter for the physical surroundings at hand, for better recognition results. At first, extensive research was conducted both on state-of-the-art face recognition algorithms and on the platforms dedicated to such purposes, along with the capabilities and features they offer. After the initial research was conducted, the requirements definition followed, alongside the development of personas and scenarios.
Direct communication with relevant stakeholders who already have experience with such systems also took place (e.g. through interviews). The platform then went through an initial design stage (draft version), during which the system was meticulously studied and optimized for efficiency through a Heuristic Evaluation (HE) by usability experts; after the relevant feedback and other improvements were incorporated into the final version of the platform, the website entered its final stage. A questionnaire was disseminated in order to collect further feedback. After gathering the results and performing the necessary statistical analysis, we can say that our approach achieved its goals.
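The Haar Cascade detector referenced above rests on Haar-like features, which can be evaluated in constant time from an integral image (summed-area table). A minimal NumPy sketch of that building block follows; the particular two-rectangle feature geometry is an illustrative choice, not the platform's actual cascade.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so the sum
    over img[r0:r1, c0:c1] needs only four table lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image in O(1)."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_two_rect_vertical(ii, r0, c0, h, w):
    """A two-rectangle Haar-like feature: left half minus right half."""
    left = box_sum(ii, r0, c0, r0 + h, c0 + w // 2)
    right = box_sum(ii, r0, c0 + w // 2, r0 + h, c0 + w)
    return left - right
```

A trained cascade (e.g. OpenCV's `CascadeClassifier`) evaluates thousands of such features per window; the integral image is what makes that fast enough for the platform's live feed.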