- Probabilistic Circuits. Causal Representation Learning. Causal Explanations. Causality and Large Language Models. Counterfactual Learning. Job requirements: Master's degree in Computer Science, Mathematics, or a …
- … motivated and skilled PhD candidate to work in the area of probabilistic machine learning. The position is fully funded for a term of four years. The research direction will be determined together …
- … computing (SC)? Are you fascinated by the emerging field of machine learning (ML)? Are you our next PhD candidate in scientific machine learning, or SciML (combining SC and ML)? Are you eager to work on the …
- … formalizes the synergy between physics, information theory, and machine learning, focusing in particular on computing with Oscillatory Neural Networks (ONNs). Project: The project aims to formalize the synergies …
- … models. Trustworthy AI is a major topic in machine learning, as illustrated by the growing number of initiatives to make AI systems more trustworthy. Although machine learning models …
- Irène Curie Fellowship: No. Department(s): Applied Physics and Science Education. Reference number: V34.7526. Job description: Are you inspired by combining physics-based models with machine learning …
- … collaborating with machine learning experts (a second PhD student in the USA), integrating personalised AI into meaningful assistive technology interventions, (4) critically evaluating methodological approaches in …
- … for compositional methods that focus on extra-functional aspects, such as performance, resource budgets, security, or energy, pursuing hybrid approaches that combine knowledge-driven and machine-learning-based techniques. As a newly …
- … of machine capabilities. However, learning these models requires direct access to vast data repositories, which poses significant privacy and logistical challenges, especially in the health-sensing domain …
- … humanities on AI safety and explainability. Reviewing technical literature on machine learning and explainable AI. Developing a normative evaluation framework for the use of explainable AI in machine vision …