TUESDAY 16.30 in my office (Room 156 - Viale Morgagni 65). It is strongly recommended to send an email in advance to announce your intention to attend office hours.
- Bachelor's Degree: April 2006, Università degli Studi di Firenze
- Master's Degree: October 2008, Università degli Studi di Firenze
- PhD in "Informatics and Automation Engineering": April 2012, Università degli Studi di Firenze
- 2012 - November 2021: fixed-term researcher (Ricercatore a Tempo Determinato), Università degli Studi di Firenze
- From 1 December 2021: Associate Professor, Università degli Studi di Firenze
Andrea Ceccarelli has been Associate Professor in Computer Science since 2021 at the Department of Mathematics and Informatics of the Università degli Studi di Firenze, where he previously served as a fixed-term researcher. He received his Bachelor's and Master's degrees in Computer Science in 2006 and 2008, respectively. In 2012 he received his PhD in Informatics and Automation Engineering, working on the design and experimental validation of critical systems in different application domains.
He has participated, in various roles, in numerous regional, national, and European research projects. He is currently the scientific lead for the Department of Mathematics and Informatics in the PRIN 2022 FLEGREA and PRIN 2022 PNRR BREADCRUMBS projects, and in the third-party participation in the IES-Factory project on behalf of CINI (Consorzio Interuniversitario Nazionale per l'Informatica). Previously, he was scientific lead for the Department of Mathematics and Informatics in the Regione Toscana POR CREO FESR 2020 SPaCe project.
He has been visiting researcher at Critical Software SA (Portugal, 2013), Universidade Federal de Alagoas (Brazil, 2014, 2016, 2017), and Universidade Estadual de Campinas (Brazil, 2015, 2019, 2022), and visiting student at the University of Coimbra (Portugal, 2010 and 2011).
Andrea Ceccarelli actively participates in the dependable and fault-tolerant systems community. He is currently General and Program Co-Chair of SafeComp 2024, and Workshop Co-Chair at PRDC 2024. He organized IWES 2023 in Florence together with Prof. A. Bondavalli and Prof. E. Vicario, and was Program Co-Chair of IEEE SRDS 2017 and LADC 2018, Doctoral Symposium Co-Chair at ISSRE 2023, Poster Chair of EDCC 2019, Industry Track Chair of LADC 2021, and organizer of four editions of the ARTISAN Summer School (2021-2024). Since 2021 he has been a member of the IFIP Working Group 10.4 on "Dependable Computing and Fault-Tolerance".
He has served on over 100 program committees of international conferences and workshops, and is author or co-author of over 120 works including international conference and workshop papers, book chapters, and journal articles.
He is a partner of the former academic spin-off Resiltech S.r.l., which he helped obtain recognition as an academic spin-off in 2012. Since 2014 he has repeatedly served as an expert reviewer of project proposals for the European Commission and for the Eureka framework.
My main research interests are in the design and validation of safe and dependable systems. Some of the topics that I have addressed recently, or that I am currently focusing on, are:
1- Safety analysis of applications based on machine learning and artificial intelligence, with particular attention to the autonomous driving context, supported by autonomous driving simulators.
2- Design and analysis of anomaly detection solutions, to build intrusion detection systems (for identifying attempted intrusions) or to detect failures.
3- Definition of GPU fault models, and of possible fault-tolerance solutions.
4- Design and experimental validation (testing) of embedded systems, applying processes and practices compliant with the required standards, with particular attention to the railway and automotive domains.
INFORMATION FOR THESES AND INTERNSHIPS: see the English section "Note", i.e., the link:
https://www.unifi.it/p-doc2-0-0-A-3f2b342f372e2b.html
Website of the Resilient Computing Lab: https://rcl.unifi.it/
Legend
- Bachelor's Degree in Computer Science: April 2006, Università degli Studi di Firenze
- Master's Degree in Computer Science: October 2008, Università degli Studi di Firenze
- PhD in "Informatics and Automation Engineering": April 2012, Università degli Studi di Firenze
- 2012 - 2021: Research Associate, Università degli Studi di Firenze
- 2021 - onwards: Associate Professor, Università degli Studi di Firenze
Andrea Ceccarelli is Associate Professor in Computer Science at the Department of Mathematics and Informatics of the University of Florence (Italy), where he also received his Master's degree (cum laude) in 2008, and his PhD in Informatics and Automation Engineering in 2012.
Andrea Ceccarelli has more than 15 years of experience in the design and assessment of dependable and secure systems and Systems-of-Systems, with a preference for experimental approaches. More recently, he has started investigating the security and safety of AI systems, focusing on the analysis of their behavior when subject to anomalous inputs, and on the consequent definition of countermeasures.
ACADEMIC SERVICES
Andrea Ceccarelli is General and Program Co-Chair of SafeComp 2024, which will be held in Florence in September 2024. He is also Workshop Co-Chair at PRDC 2023.
Previously, he organized IWES 2023 (Italian Workshop on Embedded Systems) in Florence, together with Prof. A. Bondavalli and Prof. E. Vicario, and he was TPC Co-Chair of the conferences SRDS 2017 (International Symposium on Reliable Distributed Systems) and LADC 2018 (Latin-American Symposium on Dependable Computing).
Also, he held the roles of Doctoral Symposium Co-Chair at ISSRE 2023, Poster Chair at EDCC 2019, Publication Chair at SRDS 2016 and SafeComp 2014, and Co-Chair of the LADC 2021 Industry Track, and he was a Steering Committee member of LADC.
Further, he co-organized five different workshops:
He was amongst the organizers of the summer schools ARTISAN 2021 (virtual), 2022, and 2023 on "ARTISAN - Role and effects of ARTificial Intelligence in Secure ApplicatioNs", together with O. Aktouf (Grenoble INP, France) and O. Jung (AIT, Austria), and he is currently organizing ARTISAN 2024.
He is regularly involved in the Program Committees of conferences and workshops: overall, he has been a member of over 100 TPCs of conferences and workshops, including venues such as IEEE/IFIP DSN, IEEE SRDS, IEEE ISSRE, AAAI, and IEEE NCA.
Amongst main speaking activities, he was invited speaker at:
and panelist at:
ROLE AND PARTICIPATION IN RESEARCH PROJECTS
Andrea Ceccarelli has participated in multiple research projects, with different roles and responsibilities, as reported below.
Technical coordinator of the unit in:
Work Package Leader in:
TECHNOLOGY TRANSFER AND PROJECTS WITH INDUSTRIES
Since 2009, Andrea Ceccarelli has been a partner of the company Resiltech SRL, an academic spin-off of the University of Florence. Resiltech operates in the design, verification, validation, and assessment of critical systems, with a main focus on the automotive, railway, and industrial automation domains. It counts approximately 35 full-time employees and four premises in Italy.
He regularly serves as expert for the evaluation of H2020 project proposals for the European Commission (2014, 2016 -- 2023), and for the evaluation of project proposals for the Eureka network (2019 -- 2023).
He is co-inventor of the Italian patent 102015000072477, "Methods and apparatus for resilient time signalling" (PCT/IB2016/056768).
VISITING POSITIONS
Andrea Ceccarelli has been visiting researcher at: i) Universidade Estadual de Campinas (São Paulo, Brazil): 2022 (two weeks), 2019 (three weeks), 2015 (two months); ii) Universidade Federal de Alagoas (Alagoas, Brazil): 2017 (three weeks), 2016 (one month), 2014 (two months); iii) Critical Software S.A., Coimbra, Portugal: 2013 (four months); iv) University of Coimbra, Coimbra, Portugal: 2010-2011 (three months, visiting student during PhD).
MAIN NATIONAL AND INTERNATIONAL ACKNOWLEDGMENTS
2019 - Best Experience Report Paper at LADC 2019
2020 - Best Research Paper Nominee at ISSRE 2020
2021 - Member of the IFIP WG 10.4 on Dependable Computing and Fault Tolerance
My main research interests span the broad topics of the design and assessment of safe, secure, and resilient systems. Some research activities that I have been working on recently are:
1- Study and evaluation of the impact of Machine Learning and Artificial Intelligence in safety-critical systems, especially in autonomous driving systems and applications. For such studies, I have recently been relying on the support of simulators for autonomous driving.
2- Definition, implementation, and evaluation of anomaly detection solutions for secure and dependable systems. Anomalies are deviations from the expected behaviour; detecting them enables the timely identification of attacks and failures, and anomaly detectors are exploited, for example, to build intrusion detection systems.
3- Analysis of GPU fault modes and of possible fault-tolerant approaches.
4- Design and evaluation (and especially experimental evaluation, i.e., testing) of embedded systems, in compliance with the requirements of safety standards, with a specific attention to the railway and automotive domains.
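To give a concrete flavour of the anomaly detection idea in point 2, here is a deliberately minimal sketch, not one of the actual solutions studied in this research line: a statistical detector that learns the mean and standard deviation of some monitored metric during normal (fault-free) operation, and flags observations that deviate beyond a threshold (the classic 3-sigma rule). The metric, values, and threshold below are illustrative assumptions.

```python
import statistics

def fit_baseline(samples):
    """Learn mean and standard deviation from fault-free observations."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, k=3.0):
    """Flag a value deviating more than k standard deviations
    from the learned baseline (3-sigma rule by default)."""
    return abs(value - mean) > k * stdev

# Illustrative baseline: response times (ms) observed during normal operation.
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
mean, stdev = fit_baseline(baseline)

print(is_anomalous(10.2, mean, stdev))  # nominal value -> False
print(is_anomalous(55.0, mean, stdev))  # possible attack or failure -> True
```

In a real intrusion or failure detector, the monitored metrics, the baseline model, and the thresholds are of course far richer than this single-feature sketch.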
For THESIS assignments: see Note
INFORMATION FOR THESIS
If you are interested in my research subjects, and willing to work hard, you are very welcome to contact me for a Thesis topic. I will do my best to assign you an interesting topic and attentively follow you during its development.
If interested in a Thesis, you are strongly encouraged to read the information below.
BACHELOR THESIS
You are encouraged to mail me for an appointment.
MASTER THESIS
A successful Master Thesis requires a relevant body of work, a considerable amount of time, and a productive interaction with the supervisor. To reduce the risk of unpleasant experiences, please take into consideration the following mandatory conditions:
- the student is required to be knowledgeable about the subjects of one of my courses, since the thesis topic shall deal with such subjects. If the student does not have appropriate knowledge of them, the first activity that I will request is to acquire the missing competences and have them assessed.
- the candidate is required to be available for periodic in-person meetings (no remote meetings).
- if the candidate uses ChatGPT or similar text generation tools, the generated parts must be clearly declared, together with the questions that were asked. If such text generation tools are used without proper notification, I will withdraw my availability as supervisor.
PROSPECTIVE PHD STUDENTS
As a PhD requires a 3-year research plan, your involvement must be discussed with proper care. It is important that we reason attentively to identify the research subjects that are of interest to you. You are encouraged to contact me for information and to draft a prospective research plan.
TOPICS
You can find some Thesis proposals on the website of the Resilient Computing Lab: https://rcl.unifi.it , section "Thesis Proposals". Some of them are also reported below. As the list may not be updated frequently, it is better to email me for an appointment, so that I can present the latest availabilities.
Safety of machine learning solutions. Machine learning is undeniably an enabling technology in several domains; for example, it is at the foundation of autonomous driving. However, improper (unsafe) behaviour of solutions based on machine learning may lead to dangerous consequences. Many thesis initiatives can be identified in this direction, especially using autonomous driving as a reference domain; they may span from the definition of mitigation strategies for possible unsafe behaviours, to the comparison of solutions, to their representation and assessment through simulators.
Unsafe object detectors for safe autonomous driving. The purpose is to explore mechanisms to achieve safe autonomous driving in the presence of (unsafe) object detectors. Autonomous driving relies heavily on the output produced by object detectors, for example for trajectory planning. Even though object detectors may misdetect objects, the driving task is required to remain safe, avoiding hazardous manoeuvres that may lead to accidents. Multiple initiatives can be identified in this direction, especially focused on understanding the impact that misdetections may have on the trajectory planning task, and on architectural or algorithmic solutions to mitigate the impact of misdetections.
Object criticality to improve safety of trajectory planning. Object detection in autonomous driving consists in perceiving and locating instances of objects in multi-dimensional data, such as images or lidar scans. Very recently, multiple works have proposed to evaluate object detectors by measuring their ability to detect the objects that are most likely to interfere with the driving task. Detectors are then ranked according to their ability to detect the most relevant objects, rather than the highest number of objects. However, there is little evidence so far that the relevance of predicted objects contributes to improving the safety and reliability of the driving task. For this reason, an additional parameter, the predicted criticality of an object, has been proposed, and means to compute it have been implemented. Predicted criticality has been used in conjunction with prediction confidence to facilitate the task of trajectory planning. However, a proper configuration of such parameters requires additional studies. Starting from available results and implementations, the student is requested to experiment with the above, to identify configurations that maximize the performance of trajectory planners.
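To illustrate the general idea of criticality-weighted ranking described in this proposal, here is a toy sketch. The scoring formula, the distance/closing-speed weighting, and all numbers are illustrative assumptions, not the published metric or implementation:

```python
import math

def criticality(obj, ego=(0.0, 0.0)):
    """Toy criticality score: objects that are close to the ego vehicle
    and approaching it are more likely to interfere with driving.
    The weighting below is an illustrative assumption."""
    dx, dy = obj["x"] - ego[0], obj["y"] - ego[1]
    distance = math.hypot(dx, dy)
    closing = max(0.0, -obj["radial_speed"])  # > 0 if approaching the ego
    return (1.0 / (1.0 + distance)) * (1.0 + closing)

def rank_detections(detections):
    """Rank detections by confidence weighted by criticality,
    rather than by detection confidence alone."""
    return sorted(detections,
                  key=lambda d: d["confidence"] * criticality(d),
                  reverse=True)

detections = [
    {"id": "far_car",  "x": 80.0, "y": 0.0, "radial_speed": 0.0,  "confidence": 0.95},
    {"id": "near_ped", "x": 6.0,  "y": 1.0, "radial_speed": -1.5, "confidence": 0.60},
]
# The nearby, approaching pedestrian outranks the distant car
# despite its lower detection confidence.
print([d["id"] for d in rank_detections(detections)])  # ['near_ped', 'far_car']
```

A confidence-only ranking would put the distant car first; weighting by criticality reorders the detections toward what actually matters for the planner.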
On the effect of prediction errors in cyber-physical systems. It is well known that machine learners may make wrong predictions. Cyber-physical systems encompassing machine learners may use those predictions to take erroneous actions. The research work aims to explore different applications where machine learning is exploited, identifying possible scenarios where the machine learner makes mistakes, and the likely consequences at system level. Each scenario, including its physical environment, is detailed to identify the conditions under which a system failure occurs and, as a direct consequence, the operating conditions of the machine learner under which the system is expected to behave properly. From these considerations, general rules to define the target requirements of a machine learner operating in a given physical environment are finally derived. To restrict the scope of the research and facilitate the student, the analysis will focus on the autonomous driving domain.
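As a worked example of the kind of scenario-level reasoning this proposal describes, consider an obstacle ahead of the vehicle: a missed detection is tolerable only while the obstacle is still beyond the stopping distance, which yields a minimum detection range, and hence a requirement on the perception component. The reaction time and deceleration values below are illustrative assumptions:

```python
def stopping_distance(speed_mps, reaction_time_s=1.0, deceleration_mps2=6.0):
    """Distance travelled from obstacle appearance to standstill:
    reaction distance plus braking distance v^2 / (2a).
    Reaction time and deceleration are illustrative assumptions."""
    return speed_mps * reaction_time_s + speed_mps ** 2 / (2 * deceleration_mps2)

def detector_requirement_met(detection_range_m, speed_mps):
    """The machine learner behaves 'properly' at system level only if it
    reliably detects obstacles beyond the stopping distance."""
    return detection_range_m > stopping_distance(speed_mps)

# At 20 m/s (72 km/h): 20*1 + 400/12 = ~53.3 m stopping distance.
print(round(stopping_distance(20.0), 1))     # 53.3
print(detector_requirement_met(60.0, 20.0))  # True: margin before the failure condition
print(detector_requirement_met(45.0, 20.0))  # False: a system failure can occur
```

Inverting this relation for each scenario (speed, road conditions, obstacle type) is exactly how system-level conditions translate into target requirements for the machine learner.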