Objectives: To equip learners with the skills to successfully navigate digital and traditional recruitment processes for engineering roles. This includes demonstrating EDI, technical, and employability skills using the STAR framework; tailoring CVs for AI and Applicant Tracking Systems (ATS); and preparing for aptitude and abstract reasoning tests through targeted practice to enhance problem-solving and analytical abilities.
Introduction: Large national and international employers use digital application processes to recruit graduates. These digital applications aim to capture personal details, education, and work experience. Reflect on your experiences to demonstrate your EDI, employability, and technical skills using the STAR (Situation, Task, Action, Result) framework. Small and medium-sized enterprises typically seek cover letters and CVs.
Topic: Navigating digital recruitment in engineering: CVs, AI, and aptitude tests.
Keywords: Equity Diversity and Inclusion; Employability and skills; Problem solving; Assessment criteria or methods and tools; CVs and cover letters; Digitalisation; Artificial intelligence; Information and Digital literacy; Communication; Technical integration; Writing skills; Inclusive or Responsible design; Neurodiversity; Curriculum or Course; Computer science; Computing; Engineering professionals; Professional development; Recruitment; Digital engineering tools; Business or trade or industry; Workplace culture
Master the art of applying for engineering computing jobs
In the video below, Professor Anne Nortcliffe explains how to develop expertise in securing engineering computing positions by demonstrating technical proficiency and employability skills through well-supported, evidence-based responses.
Video summary:
Master the art of applying for engineering computing jobs by showcasing both technical and employability skills through evidence-based responses.
Key insights:
⚙️AI in hiring: Understanding that many companies use AI for initial screenings emphasizes the need for clear, evidence-based answers in applications.
✏️Individual contributions: Highlighting personal achievements rather than team efforts showcases leadership and initiative, key traits employers seek.
💡Interpersonal skills: Employers value teamwork and leadership; demonstrating how you’ve influenced others highlights your potential as a valuable team member.
🌍Diversity matters: Bringing unique social perspectives into projects can lead to more inclusive solutions, making your application stand out.
⭐STAR methodology: Using the STAR method helps structure your experiences into compelling narratives, making it easier for employers to assess your qualifications.
🗒️Tailored applications: Customising your CV and cover letter for each job application reflects your genuine interest and ensures relevance to the employer’s needs.
📚Professional etiquette: Ending your application with gratitude and a clear call to action maintains professionalism and shows your enthusiasm for the role.
AI and Applications
To navigate digital recruitment, it’s crucial to understand AI’s role in candidate screening. Tailor your CV to pass AI and Applicant Tracking Systems (ATS) using resources that provide insights into keywords, formatting, and strategies. This enhances your visibility and competitiveness in the digital recruitment process.
Please note that after clicking these links, you will need to create a free account on the external website to access the materials.
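As a simplified illustration of why keyword tailoring matters, many ATS products score CVs partly by matching terms from the job description against the CV text. The sketch below is a toy model of that idea; the scoring rule, keywords, and CV text are invented for illustration and are not taken from any real ATS:

```python
# Toy illustration of keyword-based CV screening. Real ATS products are far
# more sophisticated (synonym matching, semantic search, weighting), and the
# scoring rule below is a simplification invented for this example.
import re

def keyword_score(cv_text, job_keywords):
    """Return (fraction of job keywords found verbatim in the CV, matched keywords)."""
    cv_words = set(re.findall(r"[a-z+#]+", cv_text.lower()))
    hits = [kw for kw in job_keywords if kw.lower() in cv_words]
    return len(hits) / len(job_keywords), hits

job_keywords = ["Python", "CAD", "MATLAB", "teamwork"]
cv = "Final-year engineering student with Python and MATLAB experience; strong teamwork."
score, matched = keyword_score(cv, job_keywords)
print(f"Matched {matched} -> score {score:.2f}")  # 3 of 4 keywords matched (0.75)
```

The point for applicants: a CV that paraphrases ("group work") instead of echoing the advertised term ("teamwork") can silently lose marks under this kind of matching, which is why mirroring the job description's vocabulary is standard ATS advice.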
CV and Covering Letter
These CV templates support students and graduates in standing out and highlighting their engineering and technology capabilities, especially when applying to Small and Medium Enterprises (SMEs) that do not use AI recruitment tools.
For applications to large corporations that use AI recruitment tools, it is recommended that you:
Use a plain text CV.
Include a web link to your LinkedIn profile or personal portfolio showcasing your engineering and technology capabilities.
Digitally watermark all items in your portfolio to protect your intellectual property (IP).
Aptitude and Abstract Reasoning Test
If your digital application is successful, you will typically be invited to complete aptitude and abstract reasoning tests. To excel, practise brain-training exercises and brain teasers to enhance your problem-solving, critical thinking, and analytical skills. Regular practice with similar questions boosts confidence and performance, improving your chances of passing these tests and standing out in the recruitment process.
Any views, thoughts, and opinions expressed herein are solely that of the author(s) and do not necessarily reflect the views, opinions, policies, or position of the Engineering Professors’ Council or the Toolkit sponsors and supporters.
Please note: Discussions around discrimination, prejudice and bias are highly complex and part of a much wider national and international debate, including contested histories. As such, we have limited the scope of our resources to educating and supporting students.
The resources that the EPC and its partners are producing in this area will continue to expand and, if you feel there is an issue that is currently underrepresented in our content, we would be delighted to work with you to create more. Please get in touch.
Objectives: This activity aims to equip students with strategies to thrive in video interviews.
Introduction: Our mission is to empower students with tips to excel in video interviews. This interactive challenge provides tailored advice to leverage your strengths and navigate digital recruitment challenges. Get expert guidance for in-person, video, and telephone interviews with recruiters. Learn about optimal lighting, assessment centres, and holistic interview practices.
Topic: Mastering video and virtual interview skills with inclusive preparation strategies.
Keywords: Neurodiversity; Equity Diversity and Inclusion; Interviews; Recruitment; CVs and cover letters; Digitalisation; Communication; Employability and skills; Accessibility; Professional development; Professional conduct; Digital engineering tools; Artificial intelligence; Virtual Learning Environment; Personal or professional reputation; Student support; Technology; Assessment criteria or methods and tools; Bias.
How to optimise your interview setup and presence
Watch our featured video from Wenite (below) for expert tips on optimising your interview setup and presence.
Video summary:
Being well-prepared for job interviews is essential for making strong impressions, boosting confidence, and gaining a competitive edge.
Highlights:
🎯Importance of preparation: Crucial for first impressions and confidence.
👔In-person tips: Dress appropriately, mind body language, and plan travel.
💻Virtual interview prep: Ensure tech works, choose a quiet space, and test the platform.
📞Phone interview strategies: Use notes wisely, maintain vocal clarity, and avoid distractions.
🌟STAR technique: A framework for answering behavioural questions effectively.
🏢Research the company: Align your values and goals with the organisation to show genuine interest.
❓Prepare questions: Have smart, relevant questions ready for the interviewer.
Key insights:
🔍First impressions matter: A strong initial impression can set the tone for the entire interview, making preparation vital.
💪Confidence through practice: Thorough preparation helps articulate thoughts clearly, enhancing confidence during interviews.
🏆Competitive edge: Detailed preparation allows candidates to showcase unique skills and experiences, differentiating them from others.
🎥Adapt to formats: Each interview type requires a tailored approach, from dressing well for in-person to testing tech for virtual formats.
📖Utilise the STAR technique: This adaptable framework helps structure responses to behavioural questions, ensuring clarity and relevance.
🌐Company research is critical: Understanding the company’s values and strategies can help align your responses and demonstrate genuine interest.
❓Engaging questions matter: Thoughtful questions reflect your interest in the role and provide insights into the company culture and expectations.
Lights, camera, action!
A profile picture or video interview is often your first impression on a potential employer. Ensure you convey professionalism, approachability, and confidence, with proper lighting so that you are represented accurately. AI tools can help by adjusting lighting and camera settings for accurate colour representation, helping you present your best self.
When preparing for a job interview, ensure the process is accessible to all candidates by requesting reasonable adjustments, like receiving interview questions beforehand. Approach employers with confidence and professionalism, clearly explaining how these adjustments will help you perform at your best. Proactively advocating for such adjustments fosters a more inclusive environment for all applicants.
The following resource maps neurodiversity traits to corresponding strengths, aligned with the UK Engineering Council's specification of professional engineering skills. It can aid job applications and interview preparation, as evidence of applied neurodiversity strengths can demonstrate engineering and employability skills: Neurodiversity Strengths Mapping
Objectives: This activity is our guide to navigating assessment centres, offering tips and strategies tailored to empower underrepresented groups and help you prepare, be your authentic self, stand out, and succeed.
Introduction: Assessment centres have been a key part of graduate recruitment since the 1950s, originally developed to evaluate leadership potential in military officers. Today, they are widely used by employers to assess candidates through group tasks, interviews, and individual exercises. This activity serves as a practical guide to help you navigate assessment centres with confidence. With a focus on empowering underrepresented groups, it provides tips and strategies to help you prepare effectively, present your authentic self, and stand out in a competitive selection process.
Topic: Standing out with confidence at assessment centres: a guide to preparation, authenticity, and success.
Keywords: Problem solving; Employability and skills; Communication; Leadership or management; Collaboration; Digitalisation; Professional development; Writing Skills; Equity, Diversity and Inclusion; Neurodiversity; Inclusive or Responsible design; Recruitment; Business or trade or industry; Workplace culture; Information and Digital literacy; Artificial Intelligence.
An immersive experience
Getting started | What to expect | An employer’s guide | What are assessment centre activities?
Click on each accordion tab to explore videos that guide you through navigating assessment centres, offering tips and strategies designed to empower underrepresented groups and help you prepare, be your authentic self, stand out, and succeed.
Video summary:
This video, produced by The Careers Chat, a platform associated with Warwick University, provides an overview of assessment centres used by graduate recruiters. It discusses various tasks designed to evaluate candidates’ skills in action, offering insights into the selection process and tips for preparation.
Key insights:
🌟 Always be mindful that you’re being assessed – from the moment you arrive until you leave. Maintain a professional and approachable demeanour to leave a lasting positive impression.
🤝 View fellow candidates as collaborators, not competitors. Respect their perspectives and engage in teamwork; remember, it’s possible that everyone could be offered a role.
💼 Keep in mind that the tasks are tailored to the role you’re applying for. Be authentic, and the skills you’ve already highlighted in your application will naturally stand out.
Video summary:
Assessment centres are crucial for graduate recruitment, involving various tasks to evaluate candidates’ skills through collaborative activities.
Key insights:
🎓 Real-time evaluation: Assessment centres give recruiters an opportunity to observe candidates in action: their skills, interpersonal dynamics, and teamwork.
📅 Duration and format flexibility: Be prepared and mentally ready for either a half-day or full-day assessment, face to face or online.
📝 Diverse assessment tasks: A wide range of tasks, from essays to presentations, means candidates should practise and be adaptable to showcase different skills.
🤝 Collaboration over competition: Viewing fellow candidates as collaborators rather than competitors fosters a supportive atmosphere and better outcomes for everyone.
🌈 Authenticity matters: Presenting genuine skills and authentic experiences rather than trying to fit a mould can make candidates stand out and connect with recruiters.
🚪 Professionalism is key: From the moment you arrive until you leave, maintaining a professional demeanour leaves a lasting impression and demonstrates suitability for the role.
💡 Preparation is essential: Familiarising yourself with the specific tasks related to the job application can boost confidence and performance and help you draw upon relevant skills.
Video summary: An assessment centre evaluates candidates through various exercises to assess teamwork, problem-solving, and fit within the company culture.
Key insights:
🔍 Assessment centres are designed to simulate real work environments, helping employers see how candidates fit into team dynamics and collaborate.
🧠 Psychometric tests may be retaken during the assessment, so candidates should be prepared to demonstrate their logical reasoning and numerical skills in person.
🗣️ Group exercises focus on problem-solving as a team; the process matters more than the outcome, giving you an opportunity to show your communication and leadership skills.
🎤 Presentations, whether in groups or individually, evaluate public speaking and the ability to synthesise complex information into clear solutions.
🎭 Role-play exercises test candidates’ client-handling skills and ability to provide solutions under pressure, highlighting their problem-solving approach.
🤝 Lunch and breaks are part of the assessment: they are an opportunity to network and to demonstrate the informal communication skills that could influence your success.
📊 You need to demonstrate that you understand and can apply the company’s core values and meet their desired competencies throughout the process.
Resources
Access our University Career Services Library to connect with your university’s career services and take advantage of employability training opportunities, such as mock assessment centre sessions.
Research by Thornton et al. (2019) concluded that, to prevent gender and race bias at assessment centres, employers must implement rigorous development and practices to counter both conscious and unconscious biases.
The video offers tailored guidance specifically for international students.
Acing virtual assessment centres: future you webinar:
As part of their Future You webinar series, Prospects hosted a session titled Acing Virtual Assessment Centres on Tuesday, 20th April 2021. The webinar offers valuable insights, practical tips, and expert guidance to help students confidently navigate virtual assessment centres, featuring advice from Aldi, Arcadis, and Police Now recruiters on preparing for virtual assessment centres. Watch the video below to gain useful strategies and boost your preparation.
We’re excited to share with you that we are starting work on a Complex Systems Toolkit, aimed at supporting educators in their teaching of the subject. Toolkit development will start in early 2025. The Complex Systems Toolkit is supported by Quanser. Read on to learn more and find out how you can get involved.
WHY is the EPC developing a Complex Systems Toolkit?
Complex systems shape our lives and day-to-day realities more than most people realise. At the intersection of computing, robotics, and engineering, ever more technology is dependent on complex systems, from AI to biomedical devices to infrastructure.
Understanding both complexity and systems is critical to today’s engineering graduates, especially as the UK seeks to position itself as a leader in areas like advanced manufacturing and autonomous systems.
Engineers increasingly work in environments where they are required to connect different disciplines, perspectives, and skills, to understand and navigate sociotechnical systems, and to communicate complexity to diverse audiences.
Employers today seek graduates who can work across disciplines and specialisations, collaborate in teams, and understand complexity from different fields, but who can also work with non-engineers on products and projects and translate that complexity effectively.
Systems thinking competency is seen as critical to education for sustainable development, and when integrated holistically, complex systems in engineering teaching can align with national and international initiatives that promote social and environmental responsibility.
Accreditation frameworks increasingly refer to complex problems and systems thinking in outcomes for engineering programmes.
Learning approaches for integrating complex systems knowledge, skills, and mindsets into engineering also support educators in their own professional development, since many may not have learned about the topic they are now expected to teach.
WHAT is a Complex Systems Toolkit?
The Complex Systems Toolkit will be a suite of teaching resources, which may include a scaffolded framework of learning objectives, lesson plans, guidance, case studies, project ideas, and assessment models. These are intended to help educators integrate complex systems concepts into any engineering module or course.
The Toolkit’s ready-to-use classroom resources will be suitable for those who are new to teaching complex systems, as well as those who are more experienced.
Teaching materials will focus on the development of relevant knowledge, skills, and mindsets around complex systems and contain a variety of suggestions for implementation rooted in educational best practice.
Toolkit resources will help educators to understand, plan for, and implement complex systems learning across engineering curricula and demonstrate alignment with AHEP criteria and / or graduate attributes.
Guidance articles will explain key topics in complex systems education, highlighting existing resources and solutions and promoting engagement with a network of academic and industry experts.
HOW will the Toolkit be developed?
The Toolkit materials will be created and developed by diverse contributors from academia and industry, representing a variety of fields and coming from multiple continents.
The resources will be presented so that they can be used in many different settings such as online and hybrid teaching, lecture sessions, and problem-based learning scenarios.
The Toolkit will be a community-owned project, and anyone can suggest or submit a new resource or get involved.
The Toolkit will be developed by the Engineering Professors’ Council and is supported by Quanser.
WHO is involved in Toolkit development?
The development of the Toolkit will be managed by a Working Group of subject experts from academia and industry, put together by the EPC and Quanser.
Keywords: SDGs; AHEP; Sustainability; Design; Life cycle; Local community; Environment; Circular economy; Recycling or recycled materials; Student support; Higher education; Learning outcomes.
Sustainability competency: Systems thinking; Anticipatory; Critical thinking. UNESCO has developed eight key competencies for sustainability that are aimed at learners of all ages worldwide. Many versions of these exist, such as those linked here*. In the UK, these have been adapted within higher education by AdvanceHE and the QAA with appropriate learning outcomes. The full list of competencies and learning outcome alignment can be found in the Education for Sustainable Development Guidance*. *Click the pink ''Sustainability competency'' text to learn more.
AHEP mapping: This resource addresses two of the themes from the UK’s Accreditation of Higher Education Programmes fourth edition (AHEP4): The Engineer and Society (acknowledging that engineering activity can have a significant societal impact) and Engineering Practice (the practical application of engineering concepts, tools and professional skills). To map this resource to AHEP outcomes specific to a programme under these themes, access AHEP 4 here and navigate to pages 30-31 and 35-37.
Related SDGs: SDG 9 (Industry, innovation, and infrastructure); SDG 12 (Responsible consumption and production).
Reimagined Degree Map Intervention: Adapt and repurpose learning outcomes; More real-world complexity. The Reimagined Degree Map is a guide to help engineering departments navigate the decisions that are urgently required to ensure degrees prepare students for 21st century challenges. Click the pink ''Reimagined Degree Map Intervention'' text to learn more.
Who is this article for? This article is for educators working at all levels of higher education who wish to integrate Sustainability into their robotics engineering and design curriculum or module design. It is also for students and professionals seeking practical guidance on how to integrate Sustainability considerations into their robotics engineering.
Part of the strategy to ensure that engineers incorporate sustainability into their solution development is to ensure that engineering students are educated on these topics and taught how to incorporate considerations at all stages in the engineering process (Eidenskog et al., 2022). For instance, students need not only a broad awareness of topics such as the SDGs, but also lessons on how to ensure their engineering incorporates sustainable practice. Despite the increased effort demonstrated in engineering generally, there are some challenges when the sustainability paradigm needs to be integrated into robotics study programmes or modules (Leifler and Dahlin, 2020). This article details one approach to incorporating considerations of the SDGs at all stages of new robot creation: prior to design, during creation and manufacturing, and post-deployment.
1. During research and problem definition:
Sustainability considerations should start from the beginning of the engineering cycle for robotic systems. During this phase it is important to consider what the problem statement is for the new system, and whether the proposed solution satisfies this in a sustainable way, using Key Performance Indicators (KPIs) linked to the SDGs (United Nations, 2018), such as carbon emissions, energy efficiency and social equity (Hristov and Chirico, 2019). For instance, will the energy expended to create the robot solution be offset by the robot once it is in use? Are there long-term consequences of using a robot as a solution? It is important to begin engagement with stakeholders, such as end-users, local communities, and subject matter experts to gain insight into these types of questions and any initial concerns. Educators can provide students with opportunities to engage in the research and development of robotics technology that can solve locally relevant problems and benefit the local community. These types of research projects allow students to gain valuable research experience and explore robotics innovations through solving problems that are relatable to the students. There are some successful examples across the globe as discussed in Dias et al., 2005.
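The energy-offset question above can be framed as a back-of-the-envelope KPI calculation. The figures below are entirely hypothetical, intended only to show the shape of the reasoning; real values would come from a life cycle assessment:

```python
# Back-of-the-envelope energy payback check for a proposed robot.
# All figures are hypothetical, for illustration only.
embodied_energy_kwh = 8_000.0  # energy to manufacture the robot (assumed)
robot_power_kw = 0.5           # average power draw in operation (assumed)
manual_process_kw = 2.0        # power of the process the robot replaces (assumed)
hours_per_year = 4_000.0       # annual operating hours (assumed)

# Energy saved each year by running the robot instead of the existing process.
annual_saving_kwh = (manual_process_kw - robot_power_kw) * hours_per_year

# Years of operation before the manufacturing energy is offset.
payback_years = embodied_energy_kwh / annual_saving_kwh
print(f"Energy payback period: {payback_years:.2f} years")  # 1.33 years
```

If the payback period exceeds the robot's expected service life, the solution never offsets its embodied energy, which is exactly the kind of early red flag this KPI is meant to surface.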
2. At design and conceptualisation:
Once it is decided that a robot works as an appropriate solution, Sustainability should be integrated into the robot system’s concept and design. Considerations can include incorporating eco-design principles that prioritise resource efficiency, waste reduction, and using low-impact materials. The design should use materials with relatively low environmental footprints, assessing their complete life cycles, including extraction, production, transportation, and disposal. Powered systems should prioritise energy-efficient designs and technologies to reduce operational energy consumption, fostering sustainability from the outset.
3. During creation and manufacturing:
The robotic system should be manufactured to prioritise methods that minimise, mitigate or offset waste, energy consumption, and emissions. Lean manufacturing practices can be used to optimise resource utilisation where possible. Engineers should be aware of the importance of considering sustainability in supply chain management to select suppliers with consideration of their sustainability practices, including ethical labour standards and environmentally responsible sourcing. Robotic systems should be designed in a way that is easy to assemble and disassemble, thus enabling robots to be easily recycled, or repurposed at the end of their life cycle, promoting circularity and resource conservation.
4. Deployment:
Many robotic systems are designed to run constantly, day and night, in working environments such as manufacturing plants and warehouses. Energy-efficient operation is therefore crucial, and users should be encouraged to operate the product or system efficiently, utilising energy-saving features to reduce operational impacts. Guidance and resources should be provided to users to encourage sustainable practices during the operational phase. System designers should also implement systems for continuous monitoring of performance and data collection to identify opportunities for improvement throughout the operational life.
5. Disposal:
Industrial robots have an average service life of 6-7 years, so it is important to consider their end-of-life and plan for responsible disposal or recycling of components. Designs that facilitate disassembly and recycling should be prioritised (Karastoyanov and Karastanev, 2018). Engineers should identify and safely manage hazardous materials to comply with regulations and prevent environmental harm. Designers can also explore options for product take-back and recycling as part of a circular economy strategy. There are various ways of achieving this: designers can adopt modular design methodologies to enable upgrades and repairs, extending useful life, and robot system manufacturers should be encouraged to develop strategies for refurbishing and reselling products, promoting reuse over disposal.
Conclusion:
Sustainability is not just an option but an imperative within the realm of engineering. Engineers must find solutions that not only meet technical and economic requirements but also align with environmental, social, and economic sustainability goals. As well as educating students on the broader topics and issues relating to Sustainability, there is a need for teaching considerations at different stages in the robot development lifecycle. Understanding the multifaceted connections between sustainability and engineering disciplines, as well as their impact across various stages of the engineering process, is essential for engineers to meet the challenges of the 21st century responsibly.
Professional situations: Communication; Honesty; Transparency; Informed consent; Misuse of data.
Educational level: Advanced.
Educational aim: Practising Ethical Analysis: engaging in a process by which ethical issues are defined, affected parties and consequences are identified, so that relevant moral principles can be applied to a situation in order to determine possible courses of action.
Learning and teaching notes:
This case involves Aziza, a biomedical engineer working for Neuraltrix, a hypothetical company that develops brain-computer interfaces (BCIs) for specialised applications. Aziza has always been curious about the brain and enthusiastic about using cutting-edge technologies to help people in their daily lives. Her team has designed a BCI that can measure brain activity non-invasively and, by applying machine learning algorithms, assess the job-related proficiency and expertise level of a person. She is leading the deployment of the new system in hospitals and medical schools, to be used in evaluating candidates being considered for consultant positions. In doing so, and to respond to requests to extend and use the BCI-based system in unforeseen ways, she finds herself compelled to weigh various ethical, legal and professional responsibilities.
This case study addresses two of AHEP 4’s themes: The Engineer and Society (acknowledging that engineering activity can have a significant societal impact) and Engineering Practice (the practical application of engineering concepts, tools and professional skills). To map this case study to AHEP outcomes specific to a programme under these themes, access AHEP 4 here and navigate to pages 30-31 and 35-37.
The dilemma in this case is presented in three parts. If desired, a teacher can use the Summary and Part one in isolation, but Parts two and three develop and complicate the concepts presented in the Summary and Part one to provide for additional learning. The case allows teachers the option to stop at multiple points for questions and/or activities as desired.
Learners have the opportunity to:
analyse the ethical dimensions of an engineering situation;
identify professional responsibilities of engineers in an ethical dilemma;
determine and defend a course of action in response to an ethical dilemma;
practise professional communication;
debate viable solutions to an ethical dilemma.
Teachers have the opportunity to:
highlight professional codes of ethics and their relevance to engineering situations;
address approaches to resolve interpersonal and/or professional conflict;
integrate technical content on software and/or cybersecurity;
informally evaluate students’ critical thinking and communication skills.
Brain-computer interfaces (BCIs) detect brain activity and utilise advanced signal analysis to identify features in the data that may be relevant to specific applications. These features might provide information about people’s thoughts and intentions or about their psychological traits or potential disorders, and may be interpreted for various purposes such as for medical diagnosis, for providing real-time feedback, or for interacting with external devices such as a computer. Some current non-invasive BCIs employ unobtrusive electroencephalography headsets or even optical (near-infrared) sensors to detect brain function and can be safe and convenient to use.
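As a minimal sketch of the kind of feature extraction involved, the example below computes spectral band power from a single synthetic EEG channel using a Fourier transform. Real BCIs use many channels, artefact rejection, and machine learning on top of features like these; the sampling rate, band limits, and signal here are illustrative assumptions:

```python
# Minimal sketch of one common BCI feature: spectral band power from a
# single EEG channel. Illustrative only; real pipelines are far richer.
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` (sampled at fs Hz) within the band [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)     # bin frequencies
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

# Synthetic "EEG": a dominant 10 Hz alpha rhythm plus background noise.
fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1.0 / fs)             # 4 seconds of data
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, 8, 12)        # alpha band (8-12 Hz)
beta = band_power(eeg, fs, 13, 30)        # beta band (13-30 Hz)
print(f"alpha/beta power ratio: {alpha / beta:.1f}")
```

A classifier fed features like these per channel and band is one plausible route from raw brain activity to an interpretable output such as an expertise estimate, which is the step the case study's machine learning algorithms would perform.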
Evidence shows that the brains of people with specialised expertise have identifiable functional characteristics. Biomedical technology may soon translate this knowledge into BCIs that can be used to objectively assess professional skills. Researchers already know that neural signals contain features linked to levels of expertise, which may enable the assessment of job applicants or of candidates for promotion or certification.
BCI technology would potentially benefit people by improving the match between people and their jobs, and allowing better and more nuanced career support. However, the BCI has access to additional information that may be sensitive or even troubling. For example, it could reveal a person’s health status (such as epilepsy or stroke), or it may suggest psychological traits ranging from unconscious racial bias to psychopathy. Someone sensitive about their privacy may be reluctant to consent to wearing a BCI.
In everyday life, we show what is on our minds through language and behaviour, which are normally under our control, and provide a buffer of privacy. BCIs with direct access to the brain and increasing capability to decode its activity may breach this buffer. Information collected by BCIs could be of interest not only to employers who will decide whether to hire and invest in a new employee, but also to health insurers, advertising agencies, or governments.
Optional STOP for questions and activities:
1. Activity: Risks of brain activity decoding – Identify the physical, ethical, and social difficulties that could result from the use of devices that have the ability to directly access the brain and decipher some of its psychological content such as thoughts, beliefs, and emotions.
2. Activity: Regulatory oversight – Investigate which organisations and regulatory bodies currently monitor and are responsible for the safe and ethical use of BCIs.
3. Activity: Technical integration – Investigate how BCIs work to translate brain activity into interpretable data.
Dilemma – Part one:
After the company, Neuraltrix, deployed their BCI and it had been in use for a year in several hospitals, its lead developer Aziza became part of the customer support team. While remaining proud and supportive of the technology, she had misgivings about some of its unexpected ramifications. She received the following requests from people and institutions for system modifications or for data sharing:
1. A hospital asked Neuraltrix for a technical modification that would allow the HR department to send data to their clinical neurophysiologists for “further analysis,” claiming that this might benefit people by potentially revealing a medical abnormality that might otherwise be missed.
2. An Artificial Intelligence research group partnering with Neuraltrix requested access to the data to improve their signal analysis algorithms.
3. A private health insurance company requested Neuraltrix provide access to the scan of someone who had applied for insurance coverage; they stated that they have a right to examine the scan just as life insurance agencies are allowed to perform health checks on potential customers.
4. An advertising agency asked Neuraltrix for access to their data to use them to fine-tune their customer behavioural prediction algorithms.
5. A government agency demanded access to the data to investigate a suspected case of “radicalisation”.
6. A prosecutor asked for access to the scan of a specific person because she had recently been the defendant in an assault case, where the prosecutor is gathering evidence of potential aggressive tendencies.
7. A defence attorney requested data because they were gathering potentially exonerating evidence, to prove that the defendant’s autonomy had been compromised by their brain states, following a line of argument known as “My brain made me do it.”
Optional STOP for questions and activities:
1. Activity: Identify legal issues – Students could research what laws or regulations apply to each case and consider various ways in which Neuraltrix could lawfully meet some of the above requests while rejecting others, and how their responses should be communicated within the company and to the requestor.
2. Activity: Identify ethical issues – Students could reflect on what might be the immediate ethical concerns related to sharing the data as requested.
3. Activity: Discussion or Reflection – Possible prompts:
Do you, as a biomedical engineer, have any duty to the people who have been scanned? Do you have more or less of a responsibility to these people or to Neuraltrix?
If you find that a fellow employee has already shared the data without telling others, how should you act? Should you worry that revealing this employee’s actions might cause distress or create distrust in the integrity of the entire system? Is there anyone else you should inform? Are there any risks you may be able to mitigate immediately?
Do you think the reasons and justifications given for the data requests listed above are legitimate?
Who owns the data collected by the BCI? Should it be protected? How, and for how long? Who should maintain it?
Dilemma – Part two:
The Neuraltrix BCI has an interface which allows users to provide informed consent before being scanned. The biomedical engineer developing the system was informed about a customer complaint stating that the user had felt pressured to provide consent because the scan was part of a job interview. The complaint also stated that the user had not been aware of the extent of the information gleaned from their brain, and that they would not have provided consent had they been made aware of it.
Optional STOP for questions and activities:
1. Activity: Technical analysis – Students might try to determine if it is possible to design the BCI consent system and/or consent process to eliminate the difficulties cited in the complaint. Could the device be designed to automatically detect sensitive psychological content or allow the subject to stop the scan or retroactively erase the recording?
2. Activity: Determine the broader societal impact and the wider ethical context – Students should consider what issues are raised by the widespread availability of brain scans. This could be done in small groups or a larger classroom discussion.
Possible prompts:
On the one hand, human assessors can be subject to bias and inconsistencies and, from this point of view, algorithmic assessment leaving human assessors out of the loop may be viewed as progress. On the other hand, some “black-box” algorithms used by the BCI have been criticised for opacity, hidden biases, and the difficulty of scrutinising their decisions. If a user is dissatisfied with the BCI-enhanced assessment, should they be able to opt out of it?
If use of the Neuraltrix BCI became widespread, do you believe that humans could eventually irreversibly lose their assessment skills? Compare this with the potential loss of map-reading skills due to the easy access to Satellite Navigation systems.
Can we dispense with human opinion and make assessment processes entirely “objective”?
“Goodhart’s law,” named after the economist Charles Goodhart, states that when a measure is used as a tool, it becomes vulnerable to manipulation. Would Neuraltrix BCI create new opportunities for candidates to “game” the BCIs, and how would they do it?
Dilemma – Part three:
Neuraltrix is about to launch an updated version of its BCI, which moves all data processing and storage to the cloud to facilitate interactive and mobile applications. The upgrade attracted investors and a major deal is about to be signed. The board is requesting fast deployment, and Aziza faces pressure from her managers to run the final security checks and go live with the cloud version. During these checks, Aziza discovers a critical security issue that could be exploited once the BCI runs in the cloud, risking breaches of the database and algorithm. Her managers believe this can be fixed after launch and ask her to start deployment and then identify solutions to fix the security issue.
Optional STOP for questions and activities:
1. Activity: Students should consider whether it is advisable for Aziza to follow the requests from her managers and the Neuraltrix board, or to halt deployment of the new version, which may put the new investment deal, and possibly the future of the company, at risk. Discuss the possible consequences of each option.
2. Activity: Apply an analysis based on “Duty-Ethics” and “Rights Ethics.” This could be done in small groups (which would argue for the management and engineer positions, respectively) or as a larger classroom discussion. A tabulation approach with detailed pros and cons is recommended.
Should you, as a biomedical engineer, follow company rules and go ahead with your managers’ requests, or risk the future of the company (and possibly your job) and put deployment on hold until the security issue is fixed?
The principle of act utilitarianism, as advocated by John Stuart Mill, focuses on individual actions rather than on rules: actions should be judged by whether they produce the best outcome in a given situation. Should the Neuraltrix management be guided by this principle, or rather by a cost-benefit approach?
Any views, thoughts, and opinions expressed herein are solely that of the author(s) and do not necessarily reflect the views, opinions, policies, or position of the Engineering Professors’ Council or the Toolkit sponsors and supporters.
Activity: Prompts to facilitate discussion activities.
Author: Sarah Jayne Hitt, Ph.D. SFHEA (NMITE, Edinburgh Napier University).
Overview:
There are several points in this case during which an educator can facilitate a class discussion about relevant issues. Below are prompts for discussion questions and activities that can be used. These correspond with the stopping points outlined in the case. Each prompt could take up as little or as much time as the educator wishes, depending on where they want the focus of the discussion to be. The discussion prompts for Dilemma Part three are already well developed in the case study, so this enhancement focuses on expanding the prompts in Parts one and two.
Dilemma Part one – Discussion prompts:
1. Legal Issues. Give students ten minutes, individually or in groups, to do some online research on the GDPR and the Data Protection Act (2018). In either small groups or as a large class, discuss the following prompts. You can explain that even if a person is not an expert in the law, it is important to try to understand the legal context; indeed, an engineer is likely to have to interpret law and policy in their work. These questions invite critical thinking and informed consideration, but they do not necessarily have “right” answers; they are suggestions that can help get a conversation started.
a. Are legal policies clear about how images of living persons should be managed when they are collected by technology of this kind?
b. What aspects of these laws might an engineer designing or deploying this system need to be aware of?
c. Do you think these laws are relevant when almost everyone walking around has a digital camera connected to the internet?
d. How could engineers help address legal or policy gaps through design choices?
2. Sharing Data. Before entering into a verbal discussion, either pass out the suggested questions listed in the case study on a worksheet or project on a screen. Have students spend five or ten minutes jotting down their personal responses. To understand the complexity of the issue, students could even create a quick mind map to show how different entities (police, security company, university, research group, etc.) interact on this issue. After the students spend some time in this personal reflection, educators could ask them to pair/share—turn to the person next to them and share what they wrote down. After about five minutes of this, each pair could amalgamate with another pair, with the educator giving them the prompt to report back to the full class on where they agree or disagree about the issues and why.
3. GDPR Consent. Before discussing this case particularly, ask students to describe a situation in which they had to give GDPR consent. Did they understand what they were doing, what the implications of consent are, and why? How did they feel about the process? Do they think it’s an appropriate system? This could be done as a large group, small group, or through individual reflection. Then turn the attention to this case and describe the change of perspective required here. Now instead of being the person who is asked for consent, you are the person requiring consent. Engineers are not lawyers, but engineers often are responsible for delivering legally compliant systems. If you were the engineer in charge in this case, what steps might you take to ensure consent is handled appropriately? This question could be answered in small groups, and then each group could report back to the larger class and a discussion could follow the report-backs.
4. Institutional Complexity. The questions listed in the case study relate to the fact that the building in which the facial recognition system will be used accommodates many different stakeholders. To help students with these questions, educators could divide the class into small groups, with each group representing one of the institutions or stakeholder groups (college, hospital, MTU, students, patients, public, etc.). Have each group investigate whether regulations related to captured images are different for their stakeholders, and debate if they should be different. What considerations will the engineer in the case have to account for related to that group? The findings can then be discussed as a large class.
Dilemma Part two – Discussion prompts:
The following questions relate to macroethical concerns, which means that the focus is on wider ethical contexts such as fairness, equality, responsibility, and implications.
1. Benefits and Burdens. To prepare to discuss the questions listed in the case study, students could make a chart of potential harms and potential benefits of the facial recognition system. They could do this individually, in pairs or small groups, or as a large class. Educators should encourage them to think deeply and broadly on this topic, and not just focus on the immediate, short-term implications. Once this chart is made, the questions listed in the case study could be discussed as a group, and students asked to weigh up these burdens and benefits. How did they make the choices as to when a burden should outweigh a benefit or vice versa?
2. Equality and Utility. To address the questions listed in the case study, students could do some preliminary individual or small group research on the accuracy of facial recognition systems for various population groups. The questions could then be discussed in pairs, small groups, or as a large class.
3. Engineer Responsibility. Engineers are experts who have much more specific technical knowledge and understanding than the general public. Indeed, the vast majority of people have no idea how a facial recognition system works or what the legal requirements related to it are, even if they are asked to give their consent. Does an engineer therefore have more of a responsibility to make people aware and reassure them? Or is an engineer just fulfilling their duty by doing what their boss says and making the system work? What could be problematic about taking either of those approaches?
Any views, thoughts, and opinions expressed herein are solely that of the author(s) and do not necessarily reflect the views, opinions, policies, or position of the Engineering Professors’ Council or the Toolkit sponsors and supporters.
Activity: Stakeholder mapping to elicit value assumptions and motivations.
Author: Karin Rudolph (Collective Intelligence).
Overview:
This enhancement is for an activity found in point 5 of the Summary section of the case study.
What is stakeholder mapping?
Stakeholder mapping is the visual process of laying out the people, also called stakeholders, who are involved with or affected by an issue and can influence a project, product or idea.
What is a stakeholder?
Stakeholders are people, groups or individuals who have the power to affect or be affected by a process, project or product.
Mapping out stakeholders will help you to:
Identify the stakeholders you need to collaborate with to ensure the success of the project.
Understand the different perspectives and points of view people have and how these experiences can have an impact on your project or product.
Map out a wide range of people, groups or individuals that can affect and be affected by the project.
Stakeholder mapping:
The stakeholder mapping activity is a group exercise that provides students with the opportunity to discuss ethical and societal issues related to the School Chatbot case study. We recommend doing this activity in small groups of 6-8 students per table.
To carry out this activity, you will need the following resources:
1. Sticky notes (or digital notes if online).
2. A big piece of paper or digital board (Jamboard, Miro if online) divided into four categories:
Manage closely.
Keep satisfied.
Keep informed.
Monitor.
3. Markers and pencils.
The activity:
Begin the activity by asking the students to write a list of stakeholders involved in or affected by the school chatbot.
Once the list of stakeholders is ready, ask the students to allocate each stakeholder according to one of the four categories.
Board One
List of stakeholders:
Below is a list of the stakeholders involved in the Chatbot project. Put each stakeholder on a sticky note and add them to the stakeholder map, according to their level of influence and interest in the project.
Top tip: use a different colour for each set of stakeholders.
School Chatbot – List of Stakeholders:
CEO
Developer
Student placement
Academy principal
Academy board
Academy students
Academy teachers
Teaching assistants
Parents
Funding body (Local Authority)
School administrators
School counsellors
Career advisors
Pastoral leads/wellbeing officers
Special needs (SEND) workers
GPs local practice
Media/local press
External staff contractors
Competitors (companies offering similar services).
Placement:
Place sticky notes for the stakeholders you chose from the list onto the stakeholder map.
Map your stakeholders on the grid to classify them by both their influence over and their interest in the project.
The position of a stakeholder on the grid suggests the actions you could take to engage them.
Guidance:
Each quadrant represents the following:
Category: Manage closely. These stakeholders have high power and are highly interested. Their level of interest is an opportunity to maximise the benefit of the project.
Category: Keep satisfied. These stakeholders have high power, but they are not very involved in the project.
Category: Keep informed. These stakeholders have low power but high interest. They are supporters of the project and can be helpful. Keep them involved.
Category: Monitor. These stakeholders have low power and low interest in the project. They only require monitoring.
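The quadrant logic above can be sketched as a small helper function (illustrative only; the function name and the "high"/"low" labels are assumptions, while the category names follow the grid described above):

```python
def stakeholder_quadrant(power, interest):
    """Map a stakeholder's power and interest levels ('high'/'low')
    to one of the four engagement categories on the grid."""
    high_power = power == "high"
    high_interest = interest == "high"
    if high_power and high_interest:
        return "Manage closely"
    if high_power:
        return "Keep satisfied"
    if high_interest:
        return "Keep informed"
    return "Monitor"

# Example placements (illustrative, not the "right" answers for the case):
print(stakeholder_quadrant("high", "high"))  # Manage closely
print(stakeholder_quadrant("low", "high"))   # Keep informed
```

The point of the exercise is the debate about where each stakeholder sits, not the mechanical classification, so students should justify each placement before "running" the rule.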
Board One
Motivations, assumptions, ethical and societal risks:
Materials:
1. A big piece of paper or digital board (Jamboard, Miro if online) divided into four categories:
Stakeholders.
Motivations.
Assumptions.
Ethical/societal risks.
2. Sticky notes (or digital notes if online).
3. Markers and pencils.
The activity:
Using the sticky notes from the previous stakeholders activity, ask students to discuss some of the scenarios and situations that can arise as a result of using the School Chatbot.
Ask students to write these scenarios down and add them to Board 2, according to each category: stakeholders, motivations, assumptions, and ethical and societal risks.
Discuss some issues this project can bring and ask students to write them on sticky notes according to each category.
Board Two
The Board Two activity can be done in two different ways:
Option 1:
You can use some guiding questions to direct the discussion. For example:
What role does stakeholder X play in this project?
What is their main motivation?
Can you think of any obstacle or conflict this stakeholder could face?
What is their assumption about the use of the chatbot? (Positive or negative.)
What are the potential ethical/ societal risks of using a chatbot in this context?
Option 2:
We have already written some assumptions, motivations and ethical/societal risks and you can add these as notes on a table and ask students to place according to each category: stakeholders, motivations, assumptions, and ethical and societal risks.
Motivations:
To develop a new revenue stream.
To help in developing the product.
The chatbot could present potential risks to students.
To obtain commercial gain.
To help students get access to online support 24/7.
Assumptions:
The product will help the company and the students. Any criticism shouldn’t be considered.
We can use data from students without any constraints.
The chatbot will improve the performance of students.
The institution will take precautions to prevent harm to students.
Criticism won’t be well received and it can result in losing my job.
Potential ethical and societal risks:
Lack of due diligence. Not enough understanding of the steps required to process data in an ethical manner.
Privacy risks. Data subjects (students) didn’t consent to their data being used in this context.
Risk of manipulation. Lack of engagement with the potential users can result in a lack of understanding of their needs and expectations.
Risk of surveillance. Students could be under constant monitoring, resulting in lack of freedom.
Risk of profiling. Use of sensitive historical and current data can reinforce biases and existing inequalities.
Conflict of interest.
Lack of psychological safety. Employees don’t feel empowered to express their views.
Lack of transparency/explainability. Potential lack of processes in place to explain how the algorithm works and makes decisions.
Using a human name can give the false impression of friendship, which can be used to manipulate users.
Over reliance on digital tools. This can undermine support provided by health care professionals and career advisors.
Using a gendered name (Alice) can reinforce negative stereotypes (for example, females as assistants).
Change of expectations. Constant access to information and resources can change the expectations teachers have about the student’s behaviours.
Lack of human agency/control. Students could modify their behaviours due to constant monitoring, resulting in a lack of agency and control over their environment.
Lack of involvement in the consultation that might result in poor advice/content.
Job instability and potential job losses.
Risk of disempowerment. Parents can be left out without access to information about their children’s wellbeing and performance.
Lack of diversity/consideration of a variety of students – for example students with special needs, neurodivergent students – that can result in standardised or harmful advice.
Move and match:
In the board below, you will find sticky notes with a list of stakeholders, motivations, assumptions, and ethical and societal risks.
Move and organise the sticky notes to match each category.
Discuss your options with the rest of the team and add new ethical and societal risks as you go.
Reflection:
Ask students to choose 2-4 sticky notes and explain why they think these are important ethical/societal risks.
Potential future activity:
A more advanced activity could involve a group discussion where students are asked to think about some mitigation strategies to minimise these risks.
Any views, thoughts, and opinions expressed herein are solely that of the author(s) and do not necessarily reflect the views, opinions, policies, or position of the Engineering Professors’ Council or the Toolkit sponsors and supporters.
Authors: Dr Matteo Ceriotti (University of Glasgow), Niven Payne (Fujitsu UK), Giulia Viavattene (University of Glasgow), Ellen Devereux (Fujitsu UK), Dr David Snelling (Fujitsu UK) and Matthew Nuckley (Fujitsu UK)
Abstract: A partnership between the University of Glasgow, Fujitsu UK, Astroscale and Amazon Web Services was established in response to a UK Space Agency call on Active Debris Removal mission design: the process of de-orbiting space debris objects from low Earth orbit with a dedicated spacecraft. The consortium brought together different but complementary expertise and tools to develop an algorithm (using machine learning and quantum-inspired computing) to design multiple-debris removal missions, able to select feasible sequences of debris objects among millions of permutations in a fraction of the time of previous methods, and with better performance in terms of time and propellant required.
Overview
Space and its services have quietly become part of everyone’s daily life. Mapping, geolocation, telecommunication services and weather forecasting all depend on space assets. The continued and increasing exploration and exploitation of space depends heavily on sustainability: defunct satellites, spent launcher stages, and other spacecraft parts have become part of the space debris population, or “junk”, increasing the threat of collision for current and future missions. There are 34,000 objects larger than 10 cm, and 130 million smaller than 1 cm, including non-operational satellites, upper stage rocket bodies, satellite parts, etc. Most of these objects are in the low Earth orbit region (below 1000 km), which is where most satellites operate.
Designing new satellites for demise prevents the creation of further debris. Active debris removal (ADR) aims to dispose of debris objects that are already in orbit. ADR requires a “chaser” spacecraft to grapple a “non-cooperative” target and transfer it to an orbit low enough that it will de-orbit and burn up in the atmosphere in a relatively short amount of time.
The idea
Many ADR missions would be required to make a substantial contribution to diminishing the debris population. The business challenge was to investigate how space debris removal missions could be made more commercially viable. This project investigated the feasibility, viability and design of the removal and disposal of multiple debris objects using a single chaser spacecraft. The mission scenario involves a spacecraft that transfers to the orbit of one or more objects, captures it (or them), and then transfers to a lower orbit for release and disposal. At low altitude, atmospheric drag will cause the object to rapidly fall and burn up in the atmosphere. In the meantime, the chaser spacecraft will transfer to another object (or set of objects) and continue the mission.
The problem
With millions of pieces of space junk, there are trillions of possible permutations of ADR missions between these objects that would need to be investigated to efficiently remove even a few of them. Since orbital transfers have no analytical closed-form solutions, an optimisation strategy must be used to solve the trajectory design problem, which is generally computationally demanding.
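To get a feel for the combinatorial scale, consider ordered capture sequences drawn from a shortlist of candidate objects; the shortlist size below is illustrative, not the project's actual figure:

```python
from math import perm

# Ordered sequences of 4 debris captures from an assumed shortlist of
# 2,000 candidate objects (a tiny fraction of the tracked population):
n_candidates = 2000
sequences = perm(n_candidates, 4)  # 2000 * 1999 * 1998 * 1997
print(f"{sequences:,}")  # ~1.6e13, i.e. about 16 trillion orderings
```

Each ordering would, in principle, require its own set of transfer-cost evaluations, which is why exhaustive search is hopeless and a learned cost model plus an optimiser is attractive.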
Our solution
The aim of this project was to make space debris removal missions more commercially viable, through a new solution that allows fast mission planning. First, an Artificial Neural Network (ANN) is trained to quickly predict the cost of the orbital transfer to, and disposal of, a range of debris objects. This information is then used to plan a mission of four captures from candidate debris targets using Fujitsu’s quantum-inspired optimisation technology, called Digital Annealer (DA), by formulating the problem as a quadratic unconstrained binary optimisation. We used Astroscale’s mission planning data and expertise, and ran the algorithms on the Amazon Web Services (AWS) SageMaker platform. For technical details on our approach, the reader is referred to the publications below.
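The project's actual formulation is described in the publications below; as a rough sketch of the general "select items as a quadratic unconstrained binary optimisation" idea (with made-up costs, an assumed penalty weight, and a brute-force solver standing in for the Digital Annealer), one could write:

```python
import itertools
import numpy as np

def select_k_as_qubo(costs, k, penalty=100.0):
    """Pick exactly k of n items minimising total cost, posed as a QUBO.

    Minimise x^T Q x over binary x, where Q carries the item costs on its
    diagonal plus a penalty encoding (sum(x) - k)^2 to enforce the
    cardinality constraint. Solved here by brute force over 2^n states;
    annealing hardware searches the same objective at far larger scale.
    """
    n = len(costs)
    Q = np.diag(np.asarray(costs, dtype=float))
    # Expand penalty * (sum_i x_i - k)^2, dropping the constant k^2 term:
    # each diagonal entry gains penalty * (1 - 2k) (since x_i^2 = x_i),
    # and every off-diagonal pair gains penalty (counted twice in x^T Q x).
    Q += penalty * (1 - 2 * k) * np.eye(n)
    Q += penalty * (np.ones((n, n)) - np.eye(n))
    best_x, best_val = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = bits, val
    return [i for i, b in enumerate(best_x) if b]

# Made-up transfer costs for 6 candidate debris objects; choose 4.
chosen = select_k_as_qubo([5.0, 2.0, 9.0, 1.0, 4.0, 3.0], k=4)
```

With these illustrative costs the optimiser drops the two most expensive objects; in the real problem the "costs" are themselves predicted by the trained ANN and depend on the ordering of captures, which is what makes the full formulation considerably richer than this sketch.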
Outcomes
In a test scenario, we showed that our solution produced a 25% faster mission using 18% less propellant, compared with an expert’s attempt to plan the mission under the same assumptions; the solution was also found 170,000 times faster than current methods based on an expert’s work.
Partnership
The project involved the partnership of four institutions, with areas of contributions described in the following diagram:
We believe the key to the success of the partnership was the different but complementary areas of expertise, the tools offered, and the contribution of each partner to the project. It may be easier to rely on an existing network of contacts, often with similar areas of expertise. However, this project shows that the additional effort of creating a new partnership can bring great benefits that outweigh the initial difficulties.
Project set up
An initial contact between Fujitsu and UofG defined the original idea of the project, combining the existing expertise on discrete optimisation (Fujitsu) and multi-body space missions (UofG). The team was strengthened by expertise in active space debris removal (Astroscale) and cloud computing (AWS). The project proposal was funded by the United Kingdom Space Agency (UKSA), for a duration of four months, from September 2020 to January 2021.
Due to the ongoing global pandemic, the project was run entirely online, with weekly meetings on Microsoft Teams. Fujitsu, as team lead, was responsible for the planning and scheduling of tasks, as well as for code integration and reporting.
Lessons learned and reflections
Speed in preparing the project proposal was fundamental: the very first contact between the partners was made at the end of July 2020, the proposal was submitted in mid-August, and the project officially kicked off in September.
Given the short timeframe, it was important to conceive a project proposal that fit the scope of the funder but also matched the available expertise and personnel. It was also critical to frame the business challenge in the proposal.
From the point of view of the academic team, and again given the short window between notification of successful application and start of the project, these factors were crucial for the success of the project:
Immediate availability of an internal candidate as nominated Research Assistant – there would have been no time to open a new position and recruit externally.
An excellent researcher was particularly important, as there was no time to account for potential errors in the methods and their implementation.
A candidate with experience aligned with the project was sought – there would have been no time to train new staff.
A PhD student in the research group was the best candidate for the project: at the cost of taking a leave of absence from the PhD studentship, the project offered a unique experience of industrial collaboration, enriched their CV through a ground-breaking project, added a conference paper and a journal paper to their track record, and eventually opened new areas of investigation for the rest of the PhD studentship.
Before the pandemic, it would probably have been unthinkable, or at least not very credible, to deliver a project with new partners entirely remotely, without any in-person meetings. However, this turned out to be an enabler for this project, allowing the team to maximise time spent on actual development and to save on travel costs.
Further information
G. Viavattene, E. Devereux, D. Snelling, N. Payne, S. Wokes, M. Ceriotti, Design of multiple space debris removal missions using machine learning, Acta Astronautica, 193 (2022) 277-286. DOI: 10.1016/j.actaastro.2021.12.051
D. Snelling, E. Devereux, N. Payne, M. Nuckley, G. Viavattene, M. Ceriotti, S. Wokes, G. Di Mauro, H. Brettle, Innovation in planning space debris removal missions using artificial intelligence and quantum-inspired computing, 8th European Conference on Space Debris, ESA/ESOC, Darmstadt, Germany (Virtual Conference), 2021.
Any views, thoughts, and opinions expressed herein are solely that of the author(s) and do not necessarily reflect the views, opinions, policies, or position of the Engineering Professors’ Council or the Toolkit sponsors and supporters.
Authors: Professor Sarah Hitt SFHEA (NMITE); Professor Raffaella Ocone OBE FREng FRSE (Heriot Watt University); Johnny Rich (Engineering Professors’ Council); Dr Matthew Studley (University of the West of England, Bristol); Dr Nik Whitehead (University of Wales Trinity Saint David); Dr Darian Meacham (Maastricht University); Professor Mike Bramhall (TEDI-London); Isobel Grimley (Engineering Professors’ Council).
Professional situations: Communication, Honesty, Transparency, Informed consent.
Educational level: Intermediate.
Educational aim: Practise ethical analysis. Ethical analysis is a process whereby ethical issues are defined and affected parties and consequences are identified so that relevant moral principles can be applied to a situation in order to determine possible courses of action.
Learning and teaching notes:
This case involves a software engineer who has discovered a potential data breach in a smart home community. The engineer must decide whether or not to report the breach, and then whether to alert and advise the residents. In doing so, considerations of the relevant legal, ethical, and professional responsibilities need to be weighed. The case also addresses communication in cases of uncertainty as well as macro-ethical concerns related to ubiquitous and interconnected digital technology.
This case study addresses two of AHEP 4’s themes: The Engineer and Society (acknowledging that engineering activity can have a significant societal impact) and Engineering Practice (the practical application of engineering concepts, tools and professional skills). To map this case study to AHEP outcomes specific to a programme under these themes, access AHEP 4 here and navigate to pages 30-31 and 35-37.
The dilemma in this case is presented in two parts. If desired, a teacher can use Part one in isolation, but Part two develops and complicates the concepts presented in Part one to provide for additional learning. The case allows teachers the option to stop at multiple points for questions and/or activities as desired.
Learners will have the opportunity to:
analyse the ethical dimensions of an engineering situation;
identify professional responsibilities of engineers in an ethical dilemma;
determine and defend a course of action in response to an ethical dilemma;
practise professional communication;
debate possible solutions to an ethical dilemma.
Teachers will have the opportunity to:
highlight professional codes of ethics and their relevance to engineering situations;
address approaches to resolve interpersonal and/or professional conflict;
integrate technical content on software and/or cybersecurity;
informally evaluate students’ critical thinking and communication skills.
Smart homes have been called “the road to independent living”. They have the potential to increase the autonomy and safety of older people and people with disabilities. In a smart home, the internet of things (IoT) is coupled with advanced sensors, chatbots and digital assistants. This combination enables residents to be connected with both family members and health and local services, so that if there are problems, there can be a quick response.
Ferndale is a community of smart homes. It has been developed at considerable cost and investment as a pilot project to demonstrate the potential for better and more affordable care of older people and people with disabilities. The residents have a range of capabilities and all are over the age of 70. Most live alone in their homes. Some residents are supported to live independently through reminders to take their medication; prompts to complete health and fitness exercises; help completing online shopping orders; and the detection of falls and trips throughout the house. The continuous assessment of habits, diet and routines allows the technology to build models that may help to predict future negative health outcomes, such as the onset of dementia or issues related to dietary deficiencies. The functionality of many smart home features depends on a reliable and secure internet connection.
Dilemma – Part one:
You are the software engineer responsible for the integrity of Ferndale’s system. During a routine inspection you discover several indicators suggesting a data breach may have occurred via some of the smart appliances, many of which have cameras and are voice-activated. Through the IoT, these appliances are also connected to Amazon Ring home security products – these ultimately link to Amazon, including supplying financial information and details about purchases.
Optional STOP for questions and activities:
1. Activity: Technical analysis – Before the ethical questions can be considered, the students might consider a number of immediate technical questions that will help inform the discussion on ethical issues. A sample data set or similar technical problem could be used for this analysis. For example:
Is it possible to ascertain whether a breach has actually happened and data has been accessed?
What data may have been compromised?
Is a breach of this kind preventable, and could it be better prevented in the future?
Has the system been subject to a hack, or was the data simply not secured in the first place?
Has the problem now been rectified, and all data secured?
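For the first two questions, students could work with a sample access log. The sketch below is purely illustrative: the log records, the trusted network range, and the endpoint names are all invented for the exercise, and a real investigation would use far richer telemetry. It shows one crude first indicator of a possible breach, namely requests reaching appliance endpoints from outside the community's trusted network, and summarises which data those requests touched.

```python
import ipaddress
from collections import Counter

# Hypothetical appliance access log: (hour of day, source IP, endpoint requested).
ACCESS_LOG = [
    (14, "10.0.0.12", "/camera/stream"),
    (15, "10.0.0.12", "/medication-reminders"),
    (3, "203.0.113.7", "/purchase-history"),
    (3, "203.0.113.7", "/camera/stream"),
]

# Assumed trusted range for Ferndale's internal network.
TRUSTED_NETWORK = ipaddress.ip_network("10.0.0.0/24")

def flag_suspicious(log):
    """Return entries originating outside the trusted network -
    a crude first indicator that data may have been accessed externally."""
    return [
        (hour, ip, endpoint)
        for hour, ip, endpoint in log
        if ipaddress.ip_address(ip) not in TRUSTED_NETWORK
    ]

suspicious = flag_suspicious(ACCESS_LOG)

# Summarise which kinds of data the unknown address touched - this feeds
# directly into the question "what data may have been compromised?".
touched = Counter(endpoint for _, _, endpoint in suspicious)
```

The point of the exercise is not the code itself but the inference it supports: external access to `/camera/stream` and `/purchase-history` would carry very different risks, which links back to the discussion prompts on whether some types of data are riskier than others when breached.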
2. Activity: Identify legal and ethical issues. The students should reflect on what might be the immediate ethical concerns of this situation. This could be done in small groups or a larger classroom discussion.
Possible prompts:
Is there a risk that the breach compromised the residents’ personal details, financial information, or even allowed remote and secret control of cameras? What else could have been compromised and what are the risks of these compromises? Are certain types of data more risky when breached than others? Why?
What are the legal implications if there has been a breach? Do you, as a software engineer, have any duty to the residents at this point?
At the stage where the breach and its potential implications are unknown, should you tell the community and, if so, what should you say? Some residents aren’t always able to understand the technology or how it works, so they may be unlikely to recognise the implications of situations like this. Should you worry that it might cause them distress or create distrust in the integrity of the whole system if the possible data breach is revealed?
At the stage where the breach and its potential implications are unknown, is there anyone else you should inform? What should you tell them? Are there any risks you may be able to mitigate immediately? How?
Who owns the data collected on a person living in a smart home? What should happen to it after that person dies?
3. Activity: Determine the wider ethical context. Students should consider what wider moral issues are raised by this situation. This could be done in small groups or a larger classroom discussion.
Possible prompts:
When engineered products or systems go wrong, what is our responsibility to tell the people affected?
What is our right to privacy? Can, or should, it be traded away or sacrificed for another good? Who gets to decide?
Are smart homes a good thing if their technology is always going to present privacy risks? Should the technology be limited in some way?
The homes in this case are inhabited by senior citizens with disabilities. Do we owe a different level of care to these people than others? Why? Should engineers working on software for these homes employ a duty of care in a different way than they would in software for homes for young able-bodied professionals? Why? Should a duty of care be delivered by people who have the capacity to care in the emotional sense?
Should individuals have the ability to determine their own level of risk and choose what functionality to accept based on this risk? Should technology enable these kinds of choices?
Should engineers be held responsible for unsafe systems? If not, who is responsible?
Dilemma – Part two:
You send an email to Ferndale’s manager about the potential breach, emphasising that the implications are possibly quite serious. She replies immediately, asking that you do not reveal anything to anyone until you are absolutely certain about what has happened. You email back that it may take some time to determine if the software security has been compromised and if so, what the extent of the breach has been. She replies explaining that she doesn’t want to cause a panic if there is nothing to actually worry about and says “What you don’t know won’t hurt you.” How do you respond?
Optional STOP for questions and activities:
1. Discussion: Professional values – What guidance is given by codes of ethics such as the Royal Academy of Engineering/Engineering Council’s Statement of Ethical Principles or the Association for Computing Machinery Code of Ethics?
2. Activity: Map possible courses of action. The students should think about the possible actions they might take. They can be prompted to articulate different approaches that could be adopted, such as the following, but also develop their own alternative responses.
Do nothing. Tell no one. Try to improve the security to avoid future breaches.
Shut down the smart home technology until any, and all, risks can be mitigated.
Explain the situation fully to the residents, detailing subsequent risks for the future and steps they should take to mitigate the risks themselves.
Offer a partial explanation of the situation, the solutions proposed (or carried out) and reassure them that everything is in order.
3. Activity: Hold a debate on which is the best approach and why. The students should interrogate the pros and cons of each possible course of action including the ethical, technical, and financial implications. They should decide on their own preferred course of action and explain why the balance of pros and cons is preferable to other options.
4. Activity: Role-play a conversation between the engineer and the manager, or a conversation between the engineer and a resident.
5. Discussion: Consider the following questions:
What is the role of robotics and artificial intelligence in caring for people in the future?
Is there a limit to what data should be shared and is it justified to use other people’s data for profit?
Could people like Ferndale’s residents be exploited through access to their data? How?
What more could be achieved through the use of data and connectivity to care for older or ill people, in their homes or hospitals, and what additional safeguards should be put in place?
6. Activity: Change perspectives. Imagine that you are the child of one of Ferndale’s residents and that you get word of the potential data security breach. What would you hope the managers and engineers would do?
7. Activity: Write a proposal on how the system might be improved to stop this happening in the future or to mitigate unavoidable risks. To inform the proposal, the students should also explore guidance on best practice in this area. For example, in this instance, they may decide on a series of steps such as the following:
Use human care providers to inform and explain to residents (or their families) about digital security.
Deploy a more rigorous security protocol as well as a programme of regular testing and updates to minimise the risk of the situation occurring again.
Shut down systems where the risks outweigh the potential benefits.
Instigate a reporting procedure and a chain of command for decision-making in the future.
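The final step, a reporting procedure with a chain of command, can be made concrete for students. The sketch below is a minimal illustration, not a prescribed design: the severity rules, role names, and escalation chain are all assumptions invented for the exercise. It encodes the idea that an incident involving residents' personal data must automatically reach senior decision-makers rather than being left to one engineer's discretion (under UK GDPR, a reportable personal data breach must also be notified to the ICO within 72 hours).

```python
from dataclasses import dataclass

# Hypothetical escalation chain: severity level -> roles that must be informed.
ESCALATION = {
    "low": ["site engineer"],
    "high": ["site engineer", "community manager", "data protection officer"],
    "critical": ["site engineer", "community manager",
                 "data protection officer", "regulator (ICO)"],
}

@dataclass
class Incident:
    description: str
    personal_data_involved: bool
    systems_compromised: bool

def classify(incident: Incident) -> str:
    """Crude severity rule: personal data plus compromised systems is critical."""
    if incident.personal_data_involved and incident.systems_compromised:
        return "critical"
    if incident.personal_data_involved or incident.systems_compromised:
        return "high"
    return "low"

def notify_list(incident: Incident) -> list:
    """Who must be told, regardless of any individual manager's preference."""
    return ESCALATION[classify(incident)]

breach = Incident("possible data breach via smart appliances", True, True)
recipients = notify_list(breach)
```

A procedure like this speaks directly to the dilemma in Part two: if notification is determined by an agreed rule rather than by the manager's judgement, "what you don't know won't hurt you" ceases to be an available response.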
Any views, thoughts, and opinions expressed herein are solely that of the author(s) and do not necessarily reflect the views, opinions, policies, or position of the Engineering Professors’ Council or the Toolkit sponsors and supporters.