To give an idea of the types of projects and activities Mooncake AI can help with, we list a few examples from the past. The examples listed here were carried out while Marieke Peeters was still employed at TNO and Xomnia (Mooncake AI was founded in early 2022). As new projects are completed as part of Mooncake AI, they will be listed here as well.
Ethical reviewer (Analytics Translator @ Xomnia)
The client was a Dutch governmental inspection organization that inspects vehicles to ensure they comply with the law.
In recent years, the client had developed a number of prediction models to support their inspection practice. Internally, the question arose whether, and how, these models could be evaluated from an ethical perspective. To this end, they used the assessment checklist recently drawn up by the High-Level Expert Group on AI of the European Commission.
I was asked to contribute to the review process as an external reviewer. As part of this process, I suggested future steps to raise the models, and the working methods within the institute, to a higher ethical standard.
NLAIC/HvA Teach the Teacher project (Analytics translator @ Xomnia)
Through the Netherlands AI Coalition, five Universities of Applied Sciences (UAS - from Amsterdam, Rotterdam, The Hague, Breda, and Eindhoven) joined forces to upskill their teachers with knowledge about AI-driven developments within their sector.
The goal is for every teacher to understand the opportunities and risks involved in applying AI within their sector, and the training program will enable them to illustrate these with concrete examples. The ultimate goal is for the teachers to incorporate this knowledge into their own educational programs for students. In time, there will be additional specialized courses for teachers who want to go further, e.g. work with AI technology themselves, or even develop new tools and technology for their sector.
Each UAS initially develops courses for a specific sector and faculty (e.g. Amsterdam: Business and Economics; Rotterdam: Health; The Hague: Justice and Public Services). This means there will be various "generic" modules that can be reused across faculties and educational programs, as well as various sector-specific modules that tie directly into the specifics of each application domain.
I was the project leader for the overarching project of the NLAIC consortium, as well as for the part of the project taking place at the Amsterdam UAS (HvA).
As part of the job, I coordinated the collaboration between the UASs. Furthermore, I developed part of the generic courses reused across the UASs, and coordinated the development of the courses specific to the faculty Amsterdam chose to start with (Business and Economics).
When I was accepted into the Antler startup generator program, I handed over my tasks for this project to one of my Xomnia colleagues, Lisanne Rijnveld.
Science Office Knowledge Plan (Senior research scientist @ TNO)
During my appointment at TNO, I held a secondary role of 0.5 FTE for 8 months, in addition to my role as a research scientist. In this role I worked closely with the Director of Science of our unit. Among other things, I was responsible for the roll-out of the "Knowledge Plan": all 16 departments were encouraged to write a long-term vision document recording the department's knowledge strategy in the form of an IST/SOLL (current state vs. desired state) analysis.
After the first edition, I carried out an extensive evaluation with the departments involved and gave advice to the unit management about the continuation of the knowledge plan.
The plan is now updated biennially, taking into account new developments in the outside world (both in the market and at universities and other knowledge institutions) and what these mean for the knowledge strategies of the departments in the unit.
Human-AI Teaming & Responsible AI (Senior Research Scientist AI @ TNO)
The Human-AI Teaming research line at TNO investigates how people and AI might work together in future settings.
We investigated a range of possibilities through proof-of-concept demonstrators. Examples of such concepts include:
Proactive communication: Can AI learn what information is essential to share with team members to improve the overall team performance?
Management by exception: Can AI proactively determine when it requires assistance from a human team member, and gradually bring the human into the loop, to ensure that the human is up to speed by the time a critical situation arises?
The SAIL framework: Can we add a modular layer of social AI functionalities as part of a larger multi-agent system, consisting of autonomous vehicles, intelligent information processing agents, and humans, collaborating in a large socio-technical structure? Does this layer improve team performance measures, such as information sharing, situation awareness, accuracy in handling events and threats, and so on?
MATRX: The field of human-agent teaming (HAT) research aims to improve the collaboration between humans and systems. Small tasks are often designed to explore and evaluate this research in human experiments; two examples are Blocks World for Teams (BW4T) and Space Fortress. However, these tasks are often outdated, unreliable, or developed to explore only a single dimension of HAT research. To remedy the lack of a team-task library for HAT research, we developed the huMan-Agent Teaming Rapid eXperimentation software package, or MATRX for short. MATRX's main purpose is to provide a suite of team tasks with one or more properties important to HAT research (e.g. varying team size, decision speed, or inter-team dependencies). In addition to these premade tasks, MATRX facilitates the rapid development of new team tasks.
Trust calibration: Can we enrich AI systems with the ability to make an estimation of trust between team members and act in ways to establish calibrated trust between the team members? Can we model longitudinal trust dynamics to support this process?
Work agreements: How can we ensure that an autonomous agent remains within acceptable boundaries when acting in a confined/predefined environment?
Explainable AI: What is needed for a human team member to understand, e.g., what an intelligent system is doing, what it is proposing to do, and why, or how the human is able to intervene when the intelligent system is working with faulty assumptions and/or calculations?
Relevant publications:
De Visser, E. J., Peeters, M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human–robot teams. International Journal of Social Robotics, 12(2), 459-478.
Peeters, M. M., van Diggelen, J., van den Bosch, K., Bronkhorst, A., Neerincx, M. A., Schraagen, J. M., & Raaijmakers, S. (2021). Hybrid collective intelligence in a human–AI society. AI & Society, 36(1), 217-238.
Peeters, M. M., Harbers, M., & Neerincx, M. A. (2016). Designing a personal music assistant that enhances the social, cognitive, and affective experiences of people with dementia. Computers in Human Behavior, 63, 727-737.
Van Diggelen, J., Neerincx, M., Peeters, M., & Schraagen, J. M. (2018). Developing effective and resilient human-agent teamwork using team design patterns. IEEE Intelligent Systems, 34(2), 15-24.
Mioch, T., Peeters, M. M., & Neerincx, M. A. (2018). Improving adaptive human-robot cooperation through work agreements. In 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE.
Van der Vecht, B., van Diggelen, J., Peeters, M., Barnhoorn, J., & van der Waa, J. (2018, June). SAIL: A social artificial intelligence layer for human-machine teaming. In International Conference on Practical Applications of Agents and Multi-Agent Systems (pp. 262-274). Springer, Cham.
Peeters, M. M. M., van den Bosch, K., Meyer, J. J., & Neerincx, M. A. (2012). Situated cognitive engineering: The requirements and design of automatically directed scenario-based training. In ACHI 2012, The Fifth International Conference on Advances in Computer-Human Interactions (pp. 266-272). XPS: Xpert Publishing Services.