
What's new?
Comparing Explanations from Glass-Box and Black-Box Machine-Learning Models
Michał Kuk, Szymon Bobek and Grzegorz J. Nalepa recently presented a paper on explainable AI at the prestigious International Conference on Computational Science 2022 (ICCS 2022). Below you can find a link to the paper.
12.07.2022
Read more about Comparing Explanations from Glass-Box and Black-Box Machine-Learning Models
Anomaly Detection in Asset Degradation Process Using Variational Autoencoder and Explanations
A new publication authored by JAHCAI members Szymon Bobek and Grzegorz J. Nalepa, along with Jakub Jakubowski and Przemysław Stanisz.
08.04.2022
Read more about Anomaly Detection in Asset Degradation Process Using Variational Autoencoder and Explanations


Projects

JAHCAI consists of five working groups, coordinated by Prof. Grzegorz Nalepa:

Data Science in Games (DSG): research dedicated to data science for Human-Centered AI, covering video games and new VR, AR and MR applications, as well as computational narrative. This work is connected with the development of emotive HCI/BCI interfaces based on multimodal fusion of physiological signals, voice, facial expressions and contextual data. The group cooperates with the HMI group on new interfaces and affective processing.

Human-Machine Interaction (HMI): research on emotive HCI/BCI interfaces, on EEG analysis for the assessment of cognitive processes, and on emotive and knowledge-based adaptation and personalization of user interfaces (recommendation systems, AfC and ambient assisted living). The group's activities are connected with the work of the DSG group (enhancing game development and creating games for experiments) and the KNE group (creating models and data fusion).

Knowledge and Explanation (KNE): the KNE group develops eXplainable AI methods that merge symbolic and machine-learning models (in Industry 4.0, Industry 5.0 and AfC), enhances decision support systems, and works on increasing their transparency and accountability. The group's research also relates to EEG-based assessment of cognitive processes and the representation of legal knowledge.

Law, AI and Responsibility (LAR): the group researches methods of representing the legal system, legal knowledge and legal reasoning, as well as the implications of this research for XAI and for assigning responsibility for AI systems (RAI). The group also draws on the literature on the development of XAI and decision support systems, in particular their transparency and accountability. LAR's work is closely related to the research of the KNE group.

Machine Perception (MPR): MPR's research focuses on AI-based sensory substitution systems for blind persons, aiming to create visual information substitution systems that use AI to analyze images while taking into account the transformations performed by the visual system of a sighted person. The group also works on innovative multimodal analyses of biomedical and psychological data and on automatic methods for assessing human medical and ergonomic status. Another area of research is the modelling and analysis of perception in artificial systems using AI methods.