To define and set up the INCISIVE technology DevOps framework: following a hybrid centralized-decentralized approach and utilizing existing, mature big-data technologies to manage and process heterogeneous data, enabling easy scaling and efficient management of available resources.
To implement the INCISIVE AI-driven models enhancing image processing and data analysis: focusing on improving sensitivity and specificity in the diagnosis and assessment of cancer by applying novel deep learning architectures such as Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs) and Deep Belief Networks (DBNs); improving the analysis capacity of low-cost medical imaging tools; fusing information across different modalities; synthesizing new data by combining the deep learning models with Generative Adversarial Networks (GANs); transforming images into medical reports; investigating multiple hypotheses of disease evolution; and enhancing interpretability towards explainable AI (XAI).
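The multimodal fusion mentioned above can be illustrated with a minimal sketch. This is a hypothetical example, not the INCISIVE models: it shows simple late fusion, where feature vectors from different modalities (e.g. a CNN imaging encoder and encoded clinical-record fields) are normalized per modality and concatenated into one representation for a downstream classifier. All names and values are assumptions for illustration.

```python
# Hypothetical sketch (not the INCISIVE implementation): late fusion of
# per-modality feature vectors into a single representation.
from typing import Dict, List

def l2_normalize(features: List[float]) -> List[float]:
    # Normalize each modality so no single one dominates the fused vector.
    norm = sum(x * x for x in features) ** 0.5 or 1.0
    return [x / norm for x in features]

def fuse(modalities: Dict[str, List[float]]) -> List[float]:
    # Simple late fusion: normalize per modality, then concatenate.
    fused: List[float] = []
    for name in sorted(modalities):  # deterministic modality ordering
        fused.extend(l2_normalize(modalities[name]))
    return fused

fused = fuse({
    "imaging": [3.0, 4.0],        # e.g. features from a CNN encoder
    "clinical": [1.0, 0.0, 0.0],  # e.g. encoded patient-record fields
})
print(fused)  # → [1.0, 0.0, 0.0, 0.6, 0.8]
```

In practice the fusion step would sit between learned modality encoders and a shared prediction head, but the design choice is the same: keep modalities separable until a well-defined fusion point.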
To define and incorporate analytics pipelines, enabling informed decisions: Analytics will include descriptive analysis accurately assessing the (cancer) conditions, predictive analysis enabling early identification of risks and improved prognosis, and prescriptive analysis generating recommendations to support decision making. Analytics will be implemented in a modular way and executed as flexible pipelines, exploiting the wealth of combined data.
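The descriptive/predictive/prescriptive progression can be sketched as a pipeline of interchangeable steps. This is a hypothetical toy example, not the project's analytics: the field names, scoring formula, and threshold are all invented for illustration.

```python
# Hypothetical sketch (illustrative only): a modular analytics pipeline
# chaining descriptive, predictive and prescriptive steps.
from typing import Callable, List

Step = Callable[[dict], dict]

def descriptive(record: dict) -> dict:
    # Assess the current condition, here as a toy severity score.
    record["severity"] = record["lesion_size_mm"] / 10.0
    return record

def predictive(record: dict) -> dict:
    # Estimate a (toy) risk of progression from the severity score.
    record["progression_risk"] = min(1.0, 0.2 + 0.3 * record["severity"])
    return record

def prescriptive(record: dict) -> dict:
    # Turn the predicted risk into a recommendation for the clinician.
    record["recommendation"] = (
        "refer for follow-up imaging" if record["progression_risk"] > 0.5
        else "routine monitoring"
    )
    return record

def run_pipeline(record: dict, steps: List[Step]) -> dict:
    # Steps are plain callables, so pipelines stay flexible and modular.
    for step in steps:
        record = step(record)
    return record

result = run_pipeline({"lesion_size_mm": 14.0},
                      [descriptive, predictive, prescriptive])
print(result["recommendation"])  # → refer for follow-up imaging
```

Because each step only reads and extends a shared record, steps can be added, swapped, or reordered without touching the others, which is the modularity the objective calls for.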
To develop the INCISIVE user services and reporting tools: in the form of intuitive and highly interactive visualizations, addressing stakeholders' needs for explainability of the analytics results, enabling the accurate detection, prediction and follow-up of cancer, and allowing informed decisions. The services will also include data models-as-a-service that may run at a local level, mainly for research purposes.
To train the developed analytics and AI models: improving their accuracy and performance by exploiting available data sources (open and from partners) and data collected during the project.
To incorporate HPC services: called on demand when a data-intensive analytic task (e.g. multi-omics) needs to take place. HPC-as-a-service is more affordable than "on-premise" HPC infrastructures and provides shorter turnaround times and greater flexibility, while also enhancing collaboration between the geographically dispersed teams of the consortium. This will also require code optimization activities in order to exploit the full computational capabilities of the HPC infrastructure.
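The on-demand pattern described above can be sketched as a simple dispatcher that routes a task to HPC only when it is data-intensive. This is a hypothetical illustration: the `Task` structure, the gigabyte threshold, and the returned strings are assumptions, and a real system would submit jobs through an HPC-as-a-service API rather than return a string.

```python
# Hypothetical sketch (illustrative only): deciding per task whether to
# use on-demand HPC or local infrastructure.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    data_gb: float  # input data volume drives the dispatch decision

HPC_THRESHOLD_GB = 100.0  # assumed cut-off for "data-intensive"

def dispatch(task: Task) -> str:
    # In a real deployment this would submit a job to the HPC back end;
    # here we only report where the task would run.
    if task.data_gb >= HPC_THRESHOLD_GB:
        return f"{task.name}: submitted to on-demand HPC"
    return f"{task.name}: executed on local infrastructure"

print(dispatch(Task("multi-omics integration", data_gb=250.0)))
print(dispatch(Task("single-image inference", data_gb=0.5)))
```

Keeping the decision in one dispatch point is what makes the pay-per-use model affordable: only the tasks that genuinely need HPC resources incur HPC costs.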