Contemporary AI technology has raised many ethical issues, which can be broadly divided into two types: (1) ethical issues arising from the use of AI technology, and (2) ethical issues arising from the development of AI technology. For the first type, beyond asking whether a particular AI technology should be permitted to replace humans, an important question concerns the allocation of responsibility when AI causes harm: who should be held responsible? As AI becomes more autonomous, the role humans play in any given event diminishes, potentially creating a responsibility gap: harm has occurred, yet there is no appropriate agent to hold responsible. Moreover, if AI were ever to develop genuine consciousness, how should we assess its moral status, and how should we interact with it?

The second type of issue concerns which ethical guidelines AI development must follow (for example, surveillance, trust, data neutrality, data collection and the protection of personal privacy, and the transparency and explainability of AI decision-making), and how such guidelines should be implemented. Of course, these ethical considerations also arise in the use of AI technology. This course invites students to explore these and related issues.