There is currently a lively and important societal debate about the dangers of artificial intelligence. While this debate focuses largely on general and human-like artificial intelligence, it can be argued that an overlooked but highly problematic aspect of computers is that they function in a fundamentally different way than the human brain. They therefore reach conclusions in a different manner than humans do, and it will consequently be difficult for a human to anticipate how they will be affected by the actions of an interacting agent. Moreover, for humans and artificial agents to solve tasks together, e.g. performing assembly tasks in industrial settings, or making medical diagnoses semi-automatically with a human doctor in the loop, artificial agents must be able to interpret the communicative behavior of an interacting human, and also exhibit behavior and reasoning that a human can interpret. In my talk, I will describe how my research group works towards developing Explainable AI: artificial intelligence that can interpret humans, and be interpreted by humans.