DfE call for evidence on Generative AI in education consultation response

The final report has now been published by the Department for Education (DfE) and can be found here.

The EPC’s Education, Employability and Skills Committee responded to the Department for Education’s (DfE) position on generative AI in education and its subsequent call for evidence. The response was informed by a full member consultation – led by Manish Malik (Canterbury Christ Church University) and Paul Greening (Coventry University) – enabling the EPC to draw together evidence from across the Engineering Academics Network and to remind the DfE that universities are leading the way in the development of generative AI in education.

We highlighted evidence from the network on the ways AI is used in engineering higher education; the positive outcomes of the greater use and sophistication of generative AI; ways in which the DfE could support these possibilities; and challenges that need effort from all involved.

In summary

We would expect to see the following positive outcomes as a result of the greater use and sophistication of generative AI:

  • An increase in productivity for staff and students, which would extend to the future workforce: companies would come to expect expert use of these systems and better outcomes from their employees. This will have implications for higher education institutions and lifelong learning. For example, staff may use AI for assessment work, and may even automate vivas, taking on the role of ensuring that interactions between learners and the systems are meaningful and that students are actually learning – acting as the human in the loop.
  • A beneficial effect of the above could be to improve Just, Equitable, Diverse and Inclusive practices in higher engineering education – for example, more compassionate, relatable and accessible feedback, content and assessments.
  • Better decision-making support for staff and students, and better student outcomes (retention, progression, closing gaps, etc.) as learning analytics and AI come together.

DfE could support the investigation of the possibilities and positive potential of:

  • An increase in personalised learning opportunities with AI chatbots and AI tutors (a minimal tutor sketch follows this list). Students may in future learn and be assessed in settings different from those they currently experience. Personalised tests may become more common, but care should be taken to ensure they are equitable and assess the learning outcomes for which the course is designed. Staff must play the human-in-the-loop role here to ensure this is what is happening.
  • AI is increasingly present in many design tools, and this may hinder the development of the critical skills engineers need. Where engineers retain a full appreciation of the design, its underlying theories and the limits of the tools, however, its use should be encouraged. In fact, many virtual and simulation tools may become a useful source of data for AI systems modelling human interactions with the machines represented in these tools.
  • An increase in support for the development of self-, co- and shared regulation of learning, with AI-led orchestration of individual, pair and small-group dynamics and learning within engineering education. If AI is tapped into correctly, it will add to what has already been achieved and extend it to many other scenarios. Staff’s role would be to set up such a system to handle what they would normally focus on when orchestrating class interactions without the technology, freeing them to do what the technology cannot.
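
By way of illustration only, the sketch below shows how such a personalised tutor with a human-in-the-loop check might be wired up. It assumes the OpenAI Python client; the model name, the example learning outcomes and the crude "flag for staff review" rule are all hypothetical choices, not recommendations from this response.

```python
# Illustrative only: a personalised AI tutor whose outputs can be flagged for
# staff review. Assumes the OpenAI Python client (openai>=1.0); the model
# name, learning outcomes and flagging rule are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LEARNING_OUTCOMES = [  # hypothetical example outcomes
    "Apply Kirchhoff's laws to simple resistive circuits",
    "Explain the limits of the lumped-element model",
]

SYSTEM_PROMPT = (
    "You are a tutor on a first-year circuits course. Only ask questions that "
    "assess these learning outcomes: " + "; ".join(LEARNING_OUTCOMES) + ". "
    "Give formative feedback; never award a final grade."
)

def tutor_turn(history: list[dict], student_message: str) -> tuple[str, bool]:
    """Return the tutor's reply and whether staff should review the exchange."""
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": student_message}]
    )
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    # Crude human-in-the-loop trigger: route anything that looks like a
    # grading decision, or hedged uncertainty, to an academic.
    needs_review = any(w in reply.lower() for w in ("grade", "mark", "not sure"))
    return reply, needs_review
```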

Challenges that need efforts from all involved:

  • Ensuring the data used in such systems is not poor-quality or biased.
  • Addressing data and privacy concerns, and ensuring AI systems are secure so they cannot be misused.
  • Staff and students need to improve their awareness of what behaviours, skills and knowledge will be in demand for their current and future jobs, respectively.
  • AI raises concerns about academic integrity, but these can be mitigated, as mentioned above, by AI assessing students multiple times and academics always acting as humans in the loop.
  • Trust in such systems can come either too easily, because of their magical feel, or too slowly for many; something needs to be done to moderate both ends.
  • The increasing chasm between humanities and technology proponents needs to be bridged, so that the systems are more ethical, sustainable and understood both in terms of what is possible and of the consequences for society.
The ways AI has been used in Engineering HE and the specific tools used

EPC (Engineering Professors Council) members have reported using generative AI in higher education engineering in the following ways:

Non-technical work:

  • To help create pictures for lecture notes
  • To show students how to use it to test their understanding of core programming concepts
  • To teach assessment literacy by getting students to mark and give feedback on solutions created by generative AI.
  • To generate personalised learning paths for students based on their individual strengths, weaknesses and learning styles.
  • To support teachers with marking and grading assignments and exams, saving time and supporting consistent, objective grading. E.g., Turnitin’s tools.
  • To assist researchers in finding relevant literature, identifying trends in data and themes, and even generating hypotheses. E.g., Semantic Scholar.
  • To support students. E.g., AI chatbots providing 24/7 support, answering frequent questions and providing information.
  • To increase staff productivity
  • To raise awareness of generative AI and its capabilities amongst colleagues
  • To support the removal of known bias, helping to improve the inclusiveness of teaching, learning and assessment material.
  • To make student feedback, learning, teaching and assessment materials more user friendly and inclusive
  • To explore concepts and to ask for explanations
  • To guide less confident students to become more confident over cycles of use and feedback from staff.

Engineering technical work:

  • To create and optimise designs based on specific constraints and objectives. E.g., Autodesk’s generative design software uses AI to generate a wide range of design alternatives, optimising for factors like weight, strength, material usage and cost.
  • To predict equipment failures by analysing patterns in historical data, helping to prevent downtime and reduce maintenance costs (a generic sketch of this pattern follows this list). E.g., IBM’s Watson IoT offers predictive maintenance capabilities.
  • To generate simulations of complex engineering systems and test their performance under different conditions. E.g., ANSYS AI-driven simulation tools.
  • Specific prompt engineering with general-purpose tools. E.g., Bard, ChatGPT and Bing.
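
The predictive-maintenance item above follows a common pattern worth making concrete: learn what normal sensor behaviour looks like from historical data, then flag departures from it. The sketch below uses scikit-learn's IsolationForest on synthetic vibration and temperature readings; commercial offerings such as IBM's Watson IoT wrap far more than this.

```python
# Generic predictive-maintenance sketch on synthetic data: learn the normal
# pattern of sensor readings, then flag departures from it for inspection.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Historical healthy readings: columns = vibration (mm/s), bearing temp (°C)
normal = rng.normal(loc=[2.0, 60.0], scale=[0.3, 2.0], size=(5000, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_readings = np.array([[2.1, 61.0],    # healthy
                         [4.8, 78.0]])   # drifting towards failure
for reading, flag in zip(new_readings, model.predict(new_readings)):
    if flag == -1:  # IsolationForest returns -1 for anomalies
        print(f"Schedule inspection: anomalous reading {reading}")
```
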
The main challenges faced in using generative AI and how these were addressed

Reliability of data, timeliness in innovative work, and concerns about safe and appropriate content.

  • Keeping academics as humans in the loop (a minimal review-queue sketch follows this list). Techniques like reinforcement learning from human feedback (RLHF) can help improve control over these models.
  • Training
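
A minimal sketch of the review-queue idea behind "academics as humans in the loop" is shown below: no AI-generated feedback reaches a student until a member of staff has approved or edited it. The data structures are illustrative; a real deployment would sit inside the institution's VLE with proper persistence.

```python
# Illustrative review queue: AI drafts are held until an academic approves
# or edits them, keeping a human in the loop for everything students see.
from dataclasses import dataclass

@dataclass
class DraftFeedback:
    student_id: str
    ai_draft: str
    approved_text: str | None = None  # stays None until a human signs off

review_queue: list[DraftFeedback] = []

def submit_ai_draft(student_id: str, ai_draft: str) -> None:
    """Generated feedback enters the queue instead of going to the student."""
    review_queue.append(DraftFeedback(student_id, ai_draft))

def academic_release(item: DraftFeedback, edited_text: str) -> str:
    """An academic reads, corrects and only then releases the feedback."""
    item.approved_text = edited_text
    return item.approved_text
```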

Bias.

  • Careful curation and review of training data for potential biases.
  • Using techniques like fairness-aware machine learning (a simple parity check is sketched after this list).
  • Academically supported prompt engineering by someone trained in unconscious bias.
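
One concrete, if simple, fairness check that such curation might include is comparing favourable-outcome rates across groups (demographic parity). The sketch below uses synthetic data and plain NumPy; libraries such as fairlearn provide richer, tested versions of these metrics.

```python
# Simple demographic parity check on synthetic model outputs: compare
# favourable-outcome rates across groups before trusting data or model.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favourable
groups = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "A", "B"])

rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
parity_gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {parity_gap:.2f}")
# A large gap does not prove bias on its own, but it is a prompt to audit
# the training data and features before deployment.
```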

Data, privacy and security concerns

  • Some universities have developed university-wide tools, hooking into the ChatGPT API, to give access while preventing personal data from being stored (a sketch of this pattern follows).
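
A minimal sketch of that wrapper pattern, assuming the OpenAI Python client, is shown below. The redaction rules are deliberately crude and hypothetical; a real deployment would need proper PII detection, logging policies and a data protection review.

```python
# Sketch of a university-wide wrapper that redacts obvious personal data
# before a prompt leaves the institution, and stores nothing. The redaction
# patterns are crude and hypothetical; real PII detection is much harder.
import re
from openai import OpenAI

client = OpenAI()  # institutional API key from the environment

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
STUDENT_ID = re.compile(r"\b\d{7,9}\b")  # hypothetical ID format

def redact(text: str) -> str:
    text = EMAIL.sub("[email removed]", text)
    return STUDENT_ID.sub("[id removed]", text)

def ask(prompt: str) -> str:
    """Forward a redacted prompt; deliberately keeps no log of either side."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": redact(prompt)}],
    )
    return response.choices[0].message.content
```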

Hesitance. Some students did not want to be seen to be using it, or did not want to create accounts on such tools and be profiled by machines.

Academic integrity and misuse. Students are unsure about the boundaries of when use can be considered cheating.

  • Introduction of robust ethical guidelines for AI use

Hindering the development of critical skills needed by engineers.

  • Ensure engineers have a full appreciation of the design, its underlying theories and the limits of the tools.
How generative AI could be used to improve education

  • To increase productivity for staff and students.
  • To improve engagement and effectiveness, helping students learn more effectively and at their own pace.
  • To improve accessibility: for example, to make education more accessible for students with disabilities.
  • To enable more equitable, diverse and inclusive practices, for example compassionate, relatable and accessible feedback, content and assessments.
  • To support better decision-making and better student outcomes (retention, progression, closing gaps, etc.) as learning analytics and AI come together (a simple sketch follows this list).
  • To help students understand where they need to improve by providing detailed feedback on their work, pointing out areas of strength and weakness.
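
As a sketch of what "learning analytics and AI coming together" might look like in code, the example below fits a simple retention-risk model to synthetic engagement data with scikit-learn. The features, labels and model choice are illustrative only; any real use would require ethical review and human oversight of interventions.

```python
# Synthetic early-warning sketch: a retention-risk score from engagement
# data. Features, labels and model choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Columns: VLE logins/week, assignments submitted, average mark
X = rng.normal(loc=[10, 4, 60], scale=[4, 1.5, 12], size=(400, 3))
# Synthetic label: low engagement raises withdrawal likelihood, plus noise
y = ((X[:, 0] < 6) & (X[:, 1] < 3)).astype(int)
y = np.where(rng.random(400) < 0.1, 1 - y, y)

model = LogisticRegression(max_iter=1000).fit(X, y)
new_student = np.array([[4, 2, 55]])
print("withdrawal risk:", model.predict_proba(new_student)[0, 1])
# A high score should prompt a human conversation, never automated action.
```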

Subject-specific comments

Generative AI is a powerful tool for enhancing engineering higher education, but it should be used responsibly and ethically, with all the potential risks effectively managed. Engineering will undoubtedly embrace the use of generative AI.

Generative AI can produce novel and creative outputs, from new product designs to works of art. This can lead to innovation in a wide range of engineering fields.

In engineering, many virtual and simulation tools may become a useful source of data for AI systems modelling human interactions with the machines represented in these tools.

In engineering, generative AI can also be used to:

  • create and optimise designs (see the sketch after this list). For example, in structural engineering, AI can generate a multitude of design options based on specified parameters and constraints, and then optimise these designs for factors such as strength, cost and material usage.
  • analyse data from machinery and equipment to predict when maintenance or repairs might be needed. This can help prevent equipment failure and downtime, saving time and money.
  • create accurate simulations and models of engineering systems. This can help engineers test designs and make improvements without the need for physical prototypes.
  • generate code, automate testing, and identify bugs. This can make the software development process more efficient and reduce the likelihood of errors.
  • predict the properties of new materials, or to design new materials with desired properties. This can accelerate the development of new materials and technologies.
  • analyse project data to predict outcomes, identify potential issues and suggest optimisations. This can help improve project management and increase the chances of project success.
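
The design-optimisation item at the top of this list can be made concrete with a toy example: minimise the mass of a solid round cantilever while keeping bending stress under an allowable limit, using SciPy. The load case and material values are illustrative, not a worked engineering example.

```python
# Toy constraint-driven design optimisation: minimise the mass of a solid
# round cantilever while keeping root bending stress within an allowable
# limit. Load case and material values are illustrative only.
import numpy as np
from scipy.optimize import minimize

L = 1.0              # beam length, m
F = 2000.0           # tip load, N
RHO = 2700.0         # aluminium density, kg/m^3
SIGMA_ALLOW = 80e6   # allowable bending stress, Pa

def mass(d):
    return RHO * L * np.pi * d[0] ** 2 / 4  # d[0] = diameter, m

def stress_margin(d):
    sigma = 32 * F * L / (np.pi * d[0] ** 3)  # max bending stress at root
    return SIGMA_ALLOW - sigma                # feasible when >= 0

result = minimize(mass, x0=[0.05], bounds=[(0.005, 0.5)],
                  constraints=[{"type": "ineq", "fun": stress_margin}])
print(f"optimised diameter: {result.x[0] * 1000:.1f} mm")
```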

An increasing chasm between humanities and technology proponents needs to be bridged so that the systems are more ethical, sustainable and understood, both in terms of what is possible and of the consequences for society.

Concerns and risks

  • Data, privacy and security concerns, including the potential for AI to be used for surveillance or control.
  • Misinformation and misuse: there is a risk that generative AI could be used to create misleading, biased or harmful content.
  • Ensuring the data used in such systems is not poor-quality or biased.
  • Resource consumption: training large generative AI models can be computationally intensive and consume a lot of energy.
  • Reduced or minimised human interaction, leading to job displacement and a depersonalised education system.
  • Over-reliance: AI could lead to a lack of critical thinking or a loss of important skills.
  • A risk that AI could widen the digital divide.

As AI tools such as ChatGPT, Bard, Bing and many others surface and develop further, they remain prone to errors in their generated content and cannot be relied upon fully without expert human intervention. This is also true of humans learning at a university. It has become clear that the academic’s role is therefore to check their students’ understanding, irrespective of the source used.

Ethical and legal considerations

The use of AI raises a host of ethical and regulatory issues, from privacy concerns to the potential for misuse.

The increasing chasm between humanities and technology proponents needs to be bridged so that the systems are more ethical, sustainable and understood, both in terms of what is possible and of the consequences for society.

The use of generative AI should be accompanied by robust ethical guidelines.

Some universities have developed university-wide tools, hooking into the ChatGPT API, to give access while preventing personal data from being stored.

Future predictions and enabling use

  • Teaching students how to include generative AI in their workflow.
  • Students learning and being assessed in different settings.
  • Creating personalised content tailored to individual users.
  • Personalised tests.
  • Analysing and learning from substantial amounts of data.
  • Creating virtual tutors that can assist students with their studies outside of traditional class hours.
  • Generating educational content, such as lecture notes, tutorials and laboratory activities.
  • Assisting with research by analysing substantial amounts of data and generating insights and new ideas.
  • Improving accessibility.
  • Analysing students’ data and their performance, retention and graduation rates.
  • Extending expertise to the future workforce: companies would expect expert use of these systems and better outcomes from their employees.

Support needed for education staff, pupils, parents and other stakeholders to be able to benefit from this technology

Staff and students need to understand how to use AI tools effectively. This includes technical training on how to use the tools, as well as education about the potential benefits and risks of AI, and how to use AI responsibly.

Institutions need to have the necessary resources and infrastructure to support the use of AI.

Staff and students need to be assured that their data is being used responsibly and that their privacy is being protected. This could involve implementing strong data protection measures and educating staff and students about these measures.

Clear guidelines and policies can help ensure that AI is used responsibly, ethically and with academic integrity. Staff and students need to understand these guidelines and policies, and there needs to be mechanisms in place to enforce them.

Support for accessibility: to ensure that all students can benefit from AI, support may be needed to make AI tools accessible to students with disabilities. This could involve using AI tools that are compatible with assistive technologies, or providing additional support to students who need it.

Staff and students need to improve their awareness of what behaviours, skills and knowledge will be in demand for their current and future jobs, respectively.

Development of trust. Trust in generative AI systems can come either too easily, because of their magical feel, or too slowly for many; something needs to be done to moderate both ends.

Activities we would like to see the Department for Education undertaking to support generative AI tools being used safely and effectively in education

  • Develop clear guidelines and regulations: these could cover areas such as data privacy, bias prevention and the ethical use of AI.
  • Promote transparency: encourage institutions to be transparent about their use of AI, including which AI systems are being used, what data they use, and how they are being used. This can help ensure accountability and build trust among students and educators.
  • Provide training and resources: to help educators understand how to use AI tools effectively and responsibly. This could include technical training, as well as training on the ethical implications of AI.
  • Support research: into the use of AI in education, including studies on its effectiveness, its impact on students, and best practices for its use.
  • Promote accessibility and equity: ensure that all students and institutions have access to AI tools, regardless of their resources or economic background. This could involve providing funding or resources to lower-income institutions, or developing policies to ensure equitable access to AI.
  • Foster collaboration: encourage collaboration between educational institutions, AI developers and other stakeholders. This can help ensure that AI tools are developed and used in a way that meets the needs of educators and students.
  • Monitor and evaluate AI use: regularly monitor and evaluate the use of AI in higher education to identify any issues or areas for improvement. This could involve collecting feedback from students and educators, or conducting audits of AI use.
Support the investigation of the possibilities and positive potential of:

  • An increase in personalised learning opportunities with AI chatbots and AI tutors. Students may in future learn and be assessed in settings different from those they currently experience. Personalised tests may become more common, but care should be taken to ensure they are equitable and assess the learning outcomes for which the course is designed. Staff must play the human-in-the-loop role here to ensure this is what is happening.
  • AI is increasingly present in many design tools, and this may hinder the development of the critical skills engineers need. Where engineers retain a full appreciation of the design, its underlying theories and the limits of the tools, however, its use should be encouraged. In fact, many virtual and simulation tools may become a useful source of data for AI systems modelling human interactions with the machines represented in these tools.
  • An increase in support for the development of self-, co- and shared regulation of learning, with AI-led orchestration of individual, pair and small-group dynamics and learning within engineering education. If AI is tapped into correctly, it will add to what has already been achieved and extend it to many other scenarios. Staff’s role would be to set up such a system to handle what they would normally focus on when orchestrating class interactions without the technology, freeing them to do what the technology cannot.