Methods for Assessing and Evaluating Ethics Learning in Engineering Education

Authors: Professor Sarah Hitt SFHEA (NMITE) and Professor Raffaella Ocone OBE FREng FRSE (Heriot-Watt University)

Keywords: Engineering education, assessment methods and tools, ethics assessment and evaluation, AHEP, ABET, ethics learning, assessment aims and outcomes.

Who is this article for?: This article should be read by educators at all levels in higher education who wish to integrate ethics into engineering and design curricula or module design. It will also help those educators prepare students with the integrated skill sets that employers are looking for.

Premise:

Educators who integrate ethics into their activities and modules may be unsure how to assess student learning in this area. Yet assessment of ethics learning is crucial not only for evaluating learning, but also for identifying ways to improve the teaching of ethics within engineering education. This is becoming increasingly important as accreditation standards such as AHEP (UK) and the criteria of bodies such as ABET (US) have been revised to emphasise the context of engineering practice – of which ethics is a key component. Professional and industrial organisations like the Royal Academy of Engineering and the IET are prioritising ethical principles within their activities too.

The challenge of assessment:

The challenges of assessing ethics learning can seem difficult to overcome. Many of these challenges are summarised by Davis and Feinerman (2012) as “practical limits on assessment”. These include demands on time, pressure from other instructors or administrators, difficulty in connecting the assessment of ethics with the assessment of technical content, and instructors’ unfamiliarity with, or lack of confidence in, teaching ethics.

Furthermore, as Keefer et al. (2014, pp. 250-251) point out, “realistic ethical problems are what cognitive scientists refer to as ‘ill-structured problems’, because there is no clearly specified goal, usually incomplete information, and multiple possible solution paths . . . good student responses can lead in quite different directions, providing emphases on a diversity of values and issues that are difficult to predict”.

However, scholars of engineering ethics have been studying assessment methods and practices for decades and have shown ways of overcoming these challenges. Informed by other areas of practical and professional ethics, such as business and medical ethics, their work has sought to formalise evaluation and to measure students’ learning after ethics interventions in the curriculum. Whether these interventions occur in the context of a single course or module on engineering ethics, as part of a defined design project, or integrated within technical lessons, scholars agree that assessing ethics learning can, and should, be a best practice in engineering education (Benya et al., 2013).

Assessment aims and methods:

Most educational institutions promote a variety of assessment methods as good educational practice. Both quantitative and qualitative assessment methods can be used in ethics education; many of these are described in Watts et al.’s (2017) systematic review and analysis of best practices. They include pre- and post-tests, experimental and control groups, interviews to elicit descriptive data, and written essays from which themes can be identified and extracted.

No matter which method is chosen, the key to assessing student progress in ethics learning is for the educator to align the content that is taught with the outcomes that are desired (Bairaktarova and Woodcock, 2015). These outcomes can be informed by other module or programme learning outcomes and by accreditation standards.

A good practice is to use outcomes informed by scholars of moral development and ethics teaching, who have described ways to identify and then measure defined elements of ethics learning. For example, the Engineering ethics curriculum map developed by the Royal Academy of Engineering identifies the pedagogical focus at different learning levels, with corresponding outcomes and content.

In ethics education more generally, Davis and Feinerman (2012) describe four learning aims that can be applied to engineering ethics:

  1. Improve students’ sensitivity (the awareness and recognition of ethical dilemmas).
  2. Increase students’ knowledge (ethics resources such as codes, standards, theories, and/or decision-making tools).
  3. Enhance students’ judgement (the analysis and reasoning required to make and justify ethical choices).
  4. Reinforce students’ commitment (the motivation to act based on ethics learning).

These aims correspond to a taxonomy of moral development such as that described by James Rest (1994), which increases in complexity across learning levels. For this reason, the Royal Academy of Engineering/Engineering Professors’ Council’s Engineering ethics case studies are designated as Beginner, Intermediate, and Advanced, where:

  • Beginner cases focus on ethical Awareness, Sensitivity, and Imagination
  • Intermediate cases focus on ethical Analysis, Reasoning, and Judgement
  • Advanced cases focus on ethical Motivation, Action, and Commitment.

Developing assessment tools in engineering ethics:

Educators may use these ethics learning aims and outcomes as guidance for developing assessments. For example, in an intermediate case that focuses on making a decision about an ethical dilemma, students might be assessed on their ability to:

  • identify stakeholders affected by the dilemma and describe their perspectives
  • define the problem and explain why it is an ethical dilemma
  • identify possible courses of action in response to the dilemma
  • propose a course of action that is justified by drawing on codes or standards.

After outcomes are identified, educators can design assessment tools. In the case described above, multiple-choice questions could ask students to identify stakeholders, choose among options that correctly define the problem, or identify potential courses of action.

A matching question could link stakeholders with their perspectives. Students could also be asked to explain the dilemma and propose a course of action in a written narrative, which could then be evaluated against a rubric that scores students’ proficiency on a scale from Less Proficient to Expert in categories such as:

  • the ability to anticipate ethical concerns of stakeholders
  • the ability to recognise competing ethical demands
  • the ability to refer to resources that support ethical action.

These tools could be used in formative assessments, where students are given checklists, rubrics, or scoring guides to evaluate their learning as it happens, prior to the completion of final exams or projects. Keefer et al. (2014) show formative assessment to be effective in engineering ethics learning situations not only because it benefits students, but also because it reveals gaps in instruction that can be used to improve teaching.

Sindelar et al. (2003) describe a summative assessment tool in which students provided written responses to questions about two engineering ethics scenarios and were scored using a rubric designed to evaluate their responses to an ethical dilemma. Tools like these can also be administered as pre- and post-tests, making them useful for measuring the effectiveness of ethics instruction.

Finally, Davis and Feinerman (2012) demonstrate how slight adjustments to technical questions can elicit responses that also reveal students’ ethics learning – for example, a question about the technical capabilities of a micro-fluidic device that also asks about its advantages or disadvantages to society.

Conclusion:

We should be encouraged that, as Watts et al. (2017, pp. 225-226) demonstrate, “multiple meta-analyses examining the effectiveness of ethics courses in the sciences and business” show that ethics instruction does improve students’ ability to make ethical decisions, and that ethics education has “improved significantly in the last decade”. With that in mind, educators should feel confident that they can identify which aspect of ethics learning needs to be assessed, and then measure it with an appropriately designed assessment tool.

References:

Bairaktarova, D. and Woodcock, A. (2015). ‘Engineering ethics education: Aligning practice and outcomes’, IEEE Communications Magazine, 53(11), pp.18-22.

Benya, F.F., Fletcher, C.H. and Hollander, R.D., (2013) ‘Practical Guidance on Science and Engineering Ethics Education for Instructors and Administrators: Papers and Summary from a Workshop December 12, 2012’, Washington, DC: National Academies Press.

Davis, M. and Feinerman, A. (2012) ‘Assessing graduate student progress in engineering ethics’, Science and Engineering Ethics, 18(2), pp. 351-367.

Keefer, M.W., Wilson, S.E., Dankowicz, H. and Loui, M.C., (2014) ‘The importance of formative assessment in science and engineering ethics education: Some evidence and practical advice’, Science and Engineering Ethics, 20(1), pp. 249-260.

Rest, J. R., (1994) ‘Background: Theory and research’, in Rest, J. and Narvaez, D. (eds.), Moral Development in the Professions: Psychology and Applied Ethics. Mahwah, NJ: Lawrence Erlbaum Associates, pp. 1-26.

Sindelar, M., Shuman, L., Besterfield-Sacre, M., Miller, R., Mitcham, C., Olds, B., Pinkus, R. and Wolfe, H., (2003) ‘Assessing engineering students’ abilities to resolve ethical dilemmas’, Paper presented at the ASEE/IEEE Frontiers in Education Conference, Boulder, CO, 5-8 November 2003.

Watts, L.L., Todd, E.M., Mulhearn, T.J., Medeiros, K.E., Mumford, M.D. and Connelly, S., (2017) ‘Qualitative evaluation methods in ethics education: A systematic review and analysis of best practices’, Accountability in Research, 24(4), pp. 225-242.

This work is licensed under a Creative Commons Attribution-ShareAlike 2.0 Generic License.