Step 1b: Contextual Framework Pass (Discussion)

Extract roles, states, and resources from the discussion section

Use of Artificial Intelligence in Engineering Practice
Step 1 of 5

Discussion Section

Section Content:
Discussion:
The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally.
Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category.
The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly.
However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research.
The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances.
Almost 35 years ago, in BER Case 90-6, the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools.
The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system.
The introductory paragraph in that case provides a useful summary of the issue and anticipates one of the questions we see in this case – the use of AI.
The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs.
None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI].
The BER considers the change to CAD to merely represent a drafting enhancement.
The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology.
As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement.
That translates to a scenario for potential liability.
In BER Case 90-6, the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system, whether prepared by the engineer themselves or by other engineers working under their direction and control.
The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution.
These considerations culminate in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more?
Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks.
Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception.
Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies.
Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here.
Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff.
Per Code section II.1.c, confidential information can only be shared with prior consent of the Client.
While careful review and checking of AI-generated content is consistent with ethical use, this does not end the inquiry into Engineer A’s actions.
When Engineer A uploaded Client W’s information into the open-source AI interface, this was tantamount to placing the Client’s private information in the public domain.
The facts here do not indicate Engineer A obtained permission from Client W to use the private information in the public domain.
Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority.
Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations.
Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical.
In addition to using AI to prepare the report, Engineer A also prepared draft design documents with an AI-assisted drafting tool that was new to the market.
Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions.
When Client W reviewed the design documents, they found misaligned dimensions, and key safety features (including those necessary for compliance with local regulations) had been omitted.
Turning to the omission of key safety features in the AI-generated plans, the BER looked at a similar situation previously.
BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work.
The solicitation read “Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse - no matter your design experience.
For instance, never designed a highway before?
No problem.
Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software.
In its discussion in BER Case 98-3, the BER reviewed several cases involving engineering competency, and concluded it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case.
They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances.
Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement or a substitute for engineering judgment.
Thus, Engineer A’s approach to reviewing AI-generated engineering designs presents greater ethical concerns than the ultimate use of AI for report writing.
While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A’s ethical and professional obligations.
To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3, Engineer A is not incompetent.
The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight.
Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process of its output, Engineer A risked violating this requirement.
The failure to detect misaligned dimensions and omitted safety features further indicates that Engineer A did not exercise sufficient diligence.
The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public.”
Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns.
AI-generated technical work requires at least the same level of scrutiny as human-created work.
Engineer A did not maintain responsible charge, in violation of licensure law and Code section III.8.a.
NSPE defines “Responsible Charge” in NSPE Position Statement No. 10-1778 as “being actively engaged in the engineering process, from conception to completion.
Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority.
Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experience-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate.
As the BER stated in BER Case 98-3, technology must not replace or be used as a substitute for engineering judgement.
Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints.
Recommendations from the program or intern should not be blindly accepted; they should be considered and challenged, and the resulting outputs should be understood.
Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted.
These are steps that, in this case, Engineer A chose not to follow.
While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency.
BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others.
AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product.
Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust.
Roles Extraction
LLM Prompt
DUAL ROLE EXTRACTION - Professional Roles Analysis

EXISTING ROLE CLASSES IN ONTOLOGY:
- Employer Relationship Role: Organizational relationship balancing loyalty and independence
- Engineer Role: A professional role involving engineering practice and responsibilities
- Participant Role: A role of an involved party or stakeholder that does not itself establish professional obligations (
- Professional Peer Role: Collegial relationship with mentoring and review obligations
- Professional Role: A role within a profession that entails recognized ends/goals of practice (e.g., safeguarding public
- Provider-Client Role: Service delivery relationship with duties of competence and care
- Public Responsibility Role: Societal obligation that can override other professional duties
- Role: A role that can be realized by processes involving professional duties and ethical obligations. This
- Stakeholder Role: A participant role borne by stakeholders such as Clients, Employers, and the Public. Typically not t

=== TASK ===
From the following case text (discussion section), extract information at TWO levels:

LEVEL 1 - NEW ROLE CLASSES: Identify professional roles that appear to be NEW types not covered by existing classes above. Look for:
- Specialized professional functions
- Emerging role types in engineering/technology
- Domain-specific professional positions
- Roles with unique qualifications or responsibilities

For each NEW role class, provide:
- label: Clear professional role name
- definition: Detailed description of role function and scope
- distinguishing_features: What makes this role unique/different
- professional_scope: Areas of responsibility and authority
- typical_qualifications: Required education, licensing, experience
- generated_obligations: What specific duties does this role create?
- associated_virtues: What virtues/qualities are expected (integrity, competence, etc.)?
- relationship_type: Provider-Client, Professional Peer, Employer, Public Responsibility
- domain_context: Engineering/Medical/Legal/etc.
- examples_from_case: How this role appears in the case text
- source_text: EXACT text snippet from the case where this role class is first identified or described (max 200 characters)

LEVEL 2 - ROLE INDIVIDUALS: Identify specific people mentioned who fulfill professional roles. For each person:
- name: EXACT name or identifier as it appears in the text (e.g., "Engineer A", "Client B", "Dr. Smith")
- role_classification: Which role class they fulfill (use existing classes when possible, or new class label if discovered)
- attributes: Specific qualifications, experience, titles, licenses mentioned in the text
- relationships: Employment, reporting, collaboration relationships explicitly stated
  - Each relationship should specify: type (employs, reports_to, collaborates_with, serves_client, etc.) and target (person/org name)
- active_obligations: What specific duties is this person fulfilling in the case?
- ethical_tensions: Any conflicts between role obligations and personal/other obligations?
- case_involvement: How they participate in this case
- source_text: EXACT text snippet from the case where this individual is first mentioned or described (max 200 characters)

IMPORTANT: Use ONLY the actual names/identifiers found in the case text. DO NOT create realistic names or make up details not explicitly stated.

CASE TEXT: The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally. Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category. The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly.
However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research. The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances. Almost 35 years ago, in BER Case 90-6 , the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools. The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system. The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case – use of AI. The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs. None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI]. The BER considers the change to CAD to merely represent a drafting enhancement. The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology. As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement. That translates to a scenario for potential liability. In BER Case 90-6 , the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control. 
The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution. Culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks. Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception. Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies. Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here. Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. Per Code section II.1.c, confidential information can only be shared with prior consent of the Client. While careful review and checking of AI-generated content was consistent for ethical use, this does not end the inquiry into Engineer A’s actions. 
When Engineer A uploaded Client W’s information into the AI open-source interface, this was tantamount to placing the Client’s private information in the public domain. The facts here do not indicate Engineer A obtained permission from Client W to use the private information in the public domain. Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority. Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations. Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical. In addition to using AI to prepare the report, Engineer A also prepared draft design documents with a AI-assisted drafting tool that was new to the market. Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions. When Client W reviewed the design documents, they found misaligned dimensions and key safety features (including those necessary for compliance with local regulations) were omitted. Turning to the omission of key safety features in the AI-generated plans, the BER looked at a similar situation previously. BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work. The solicitation read “Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse - no matter your design experience. For instance, never designed a highway before? No problem. Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software. 
In its discussion in BER Case 98-3 , the BER reviewed several cases involving engineering competency, and concluded it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case. They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances. Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment. Thus, Engineer A’s approach to reviewing AI-generated engineering designs presents greater ethical concerns than the ultimate use of AI for report writing. While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A’s ethical and professional obligations. To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3 , Engineer A is not incompetent. The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight. Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process of its output, Engineer A risked violating this requirement. Furthermore, failure to detect misaligned dimensions and omitted safety features further indicates that Engineer A did not exercise sufficient diligence. 
The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with the Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public”. Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns. AI-generated technical work requires at least the same level of scrutiny as human-created work. Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a. NSPE defines “Responsible Charge” in NSPE Position Statement No. 10-1778 as “being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority. Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experienced-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate. The BER notes that in BER Case 98-3 , the BER stated that technology must not replace or be used as a substitute for engineering judgement. Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints. Recommendations from the program or intern should not be blindly accepted, they should be considered and challenged and the resulting outputs should be understood. Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted. These are steps that, in this case, Engineer A chose not to follow. 
While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency. BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others. AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product. Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust.

Respond with valid JSON in this format:

{
  "new_role_classes": [
    {
      "label": "Environmental Compliance Specialist",
      "definition": "Professional responsible for ensuring projects meet environmental regulations and standards",
      "distinguishing_features": ["Environmental regulation expertise", "Compliance assessment capabilities", "EPA standards knowledge"],
      "professional_scope": "Environmental impact assessment, regulatory compliance review, permit coordination",
      "typical_qualifications": ["Environmental engineering degree", "Regulatory compliance experience", "Knowledge of EPA standards"],
      "generated_obligations": ["Ensure regulatory compliance", "Report violations", "Maintain environmental standards"],
      "associated_virtues": ["Environmental stewardship", "Regulatory integrity", "Technical competence"],
      "relationship_type": "Provider-Client",
      "domain_context": "Engineering",
      "examples_from_case": ["Engineer A was retained to prepare environmental assessment", "specialist reviewed compliance requirements"],
      "source_text": "Engineer A was retained to prepare environmental assessment"
    }
  ],
  "role_individuals": [
    {
      "name": "Engineer A",
      "role_classification": "Environmental Compliance Specialist",
      "attributes": {
        "title": "Engineer",
        "license": "professional engineering license",
        "specialization": "environmental engineer",
        "experience": "several years of experience"
      },
      "relationships": [
        {"type": "retained_by", "target": "Client W"}
      ],
      "case_involvement": "Retained to prepare comprehensive report addressing organic compound characteristics",
      "source_text": "Engineer A, a professional engineer with several years of experience, was retained by Client W"
    }
  ]
}
Saved: 2026-01-05 18:48
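The roles prompt pins down a specific JSON response shape. As an illustration only (the pipeline's actual validation tooling is not shown in this log), here is a minimal Python sketch of how such a response could be checked against the fields the prompt's example uses; the function and constant names are hypothetical, and the required-key sets follow the example response format above rather than the fuller LEVEL 1/LEVEL 2 field lists:

```python
import json

# Required keys, taken from the example JSON in the prompt above (hypothetical helper).
ROLE_CLASS_KEYS = {
    "label", "definition", "distinguishing_features", "professional_scope",
    "typical_qualifications", "generated_obligations", "associated_virtues",
    "relationship_type", "domain_context", "examples_from_case", "source_text",
}
ROLE_INDIVIDUAL_KEYS = {
    "name", "role_classification", "attributes", "relationships",
    "case_involvement", "source_text",
}

def validate_roles_response(raw: str) -> dict:
    """Parse an LLM response and verify it matches the expected shape."""
    data = json.loads(raw)
    for cls in data.get("new_role_classes", []):
        missing = ROLE_CLASS_KEYS - set(cls)
        if missing:
            raise ValueError(f"role class missing keys: {sorted(missing)}")
    for ind in data.get("role_individuals", []):
        missing = ROLE_INDIVIDUAL_KEYS - set(ind)
        if missing:
            raise ValueError(f"role individual missing keys: {sorted(missing)}")
        # The prompt caps source_text snippets at 200 characters.
        if len(ind["source_text"]) > 200:
            raise ValueError(f"source_text exceeds 200 characters for {ind['name']}")
    return data
```

A check like this would catch, for example, a response whose "role_individuals" entries omit the mandatory "source_text" snippet before the extraction is saved to the ontology.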
States Extraction
LLM Prompt
EXISTING STATE CLASSES IN ONTOLOGY (DO NOT RE-EXTRACT THESE):

STATE STATES:
- AI Tool Inexperience State: A state where a professional is using AI tools without prior experience or full understanding of their functionality, accuracy, and limitations
- AI Tool Reliance State: A state where a professional is using AI-generated content or tools for technical work without full verification processes
- Certification Required State: Checkpoint state requiring formal validation processes
- Client Risk Acceptance State: A state where a client has been fully informed of specific risks to vulnerable populations but chooses to proceed without mitigation measures
- Climate Resilience Policy State: A state where an organization has formal policies requiring infrastructure projects to incorporate climate change resilience and sustainability considerations
- Competing Duties State: State requiring ethical prioritization between conflicting obligations
- Confidentiality Breach State: A state where client confidential information has been exposed to unauthorized parties or systems without prior consent
- Conflict of Interest State: Professional situation where personal and professional interests compete
- Disproportionate Impact Discovery State: A state where a professional has discovered that a proposed solution would disproportionately harm a specific vulnerable population under certain conditions
- Insufficient Attribution State: A state where substantial contributions to work product from AI or other sources are not properly acknowledged or cited
- Make Objective Truthful Statements: Requirement for honesty in professional communications
- Mentor Absence State: A state where a professional lacks access to their established mentor or supervisor for guidance and quality assurance, affecting their confidence and work processes
- Non-Compliant State: State requiring compliance remediation
- Non-Compliant State: Problematic state requiring immediate corrective action
- Objective and Truthful Statements: Requirement for honesty in professional communications
- Professional Position Statement: Official position statements from professional organizations defining key concepts and standards
- Provide Objective Statements: Professional communication standard
- Public Statements: Requirement for honesty and objectivity in all public communications and professional statements
- Regulatory Compliance State: Legal compliance context constraining actions
- Stakeholder Division State: A state where stakeholder groups have expressed conflicting preferences for different technical solutions, creating competing pressures on professional decision-making
- State: A quality representing conditions that affect ethical decisions and professional conduct. This is the S component of the formal specification D=(R,P,O,S,Rs,A,E,Ca,Cs).
- Technical Writing Insecurity State: A state where a professional lacks confidence in a specific technical skill area despite having expertise in other aspects of their field

IMPORTANT: Only extract NEW state types not listed above!

You are analyzing a professional ethics case to extract both STATE CLASSES and STATE INSTANCES.

DEFINITIONS:
- STATE CLASS: A type of situational condition (e.g., "Conflict of Interest", "Emergency Situation", "Resource Constraint")
- STATE INDIVIDUAL: A specific instance of a state active in this case attached to specific people/organizations

CRITICAL REQUIREMENT: Every STATE CLASS you identify MUST be based on at least one specific STATE INDIVIDUAL instance in the case. You cannot propose a state class without providing the concrete instance(s) that demonstrate it.

KEY INSIGHT FROM LITERATURE: States are not abstract - they are concrete conditions affecting specific actors at specific times. Each state has a subject (WHO is in the state), temporal boundaries (WHEN), and causal relationships (WHY).

YOUR TASK - Extract two LINKED types of entities:

1. NEW STATE CLASSES (types not in the existing ontology above):
- Novel types of situational states discovered in this case
- Must be sufficiently general to apply to other cases
- Should represent distinct environmental or contextual conditions
- Consider both inertial (persistent) and non-inertial (momentary) fluents

2. STATE INDIVIDUALS (specific instances in this case):
- Specific states active in this case narrative
- MUST be attached to specific individuals or organizations in the case
- Include temporal properties (when initiated, when terminated)
- Include causal relationships (triggered by what event, affects which obligations)
- Map to existing classes where possible, or to new classes you discover

EXTRACTION GUIDELINES:

For NEW STATE CLASSES, identify:
- Label: Clear, professional name for the state type
- Definition: What this state represents
- Activation conditions: What events/conditions trigger this state
- Termination conditions: What events/conditions end this state
- Persistence type: "inertial" (persists until terminated) or "non-inertial" (momentary)
- Affected obligations: Which professional duties does this state affect?
- Temporal properties: How does this state evolve over time?
- Domain context: Medical/Engineering/Legal/etc.
- Examples from case: Specific instances showing this state type

For STATE INDIVIDUALS, identify:
- Identifier: Unique descriptor (e.g., "John_Smith_ConflictOfInterest_ProjectX")
- State class: Which state type it represents (existing or new)
- Subject: WHO is in this state (person/organization name from the case)
- Initiated by: What event triggered this state?
- Initiated at: When did this state begin?
- Terminated by: What event ended this state (if applicable)?
- Terminated at: When did this state end (if applicable)?
- Affects obligations: Which specific obligations were affected?
- Urgency/Intensity: Does this state's urgency change over time?
- Related parties: Who else is affected by this state?
- Case involvement: How this state affected the case outcome

CASE TEXT FROM discussion SECTION: The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally. Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category. The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly. However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research. The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances. Almost 35 years ago, in BER Case 90-6, the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools. The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system. The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case – use of AI. The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs. None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI]. The BER considers the change to CAD to merely represent a drafting enhancement. The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology.
As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement. That translates to a scenario for potential liability. In BER Case 90-6, the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control.

The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution, culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks. Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception. Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies. Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here.
Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. Per Code section II.1.c, confidential information can only be shared with prior consent of the Client. While careful review and checking of AI-generated content was consistent with ethical use, this does not end the inquiry into Engineer A’s actions. When Engineer A uploaded Client W’s information into the AI open-source interface, this was tantamount to placing the Client’s private information in the public domain. The facts here do not indicate Engineer A obtained permission from Client W to use the private information in the public domain. Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority. Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations. Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical.

In addition to using AI to prepare the report, Engineer A also prepared draft design documents with an AI-assisted drafting tool that was new to the market. Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions. When Client W reviewed the design documents, they found misaligned dimensions, and key safety features (including those necessary for compliance with local regulations) were omitted. Turning to the omission of key safety features in the AI-generated plans, the BER looked at a similar situation previously. BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work.
The solicitation read “Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse - no matter your design experience. For instance, never designed a highway before? No problem. Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software. In its discussion in BER Case 98-3, the BER reviewed several cases involving engineering competency and concluded it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case. They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances. Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement or a substitute for engineering judgment.

Thus, Engineer A’s approach to reviewing AI-generated engineering designs presents greater ethical concerns than the ultimate use of AI for report writing. While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A’s ethical and professional obligations. To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3, Engineer A is not incompetent. The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight.
Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process for their output, Engineer A risked violating this requirement. Furthermore, the failure to detect misaligned dimensions and omitted safety features indicates that Engineer A did not exercise sufficient diligence. The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public.” Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns. AI-generated technical work requires at least the same level of scrutiny as human-created work. Engineer A did not maintain responsible charge, in violation of licensure law and Code section III.8.a. NSPE defines “Responsible Charge” in NSPE Position Statement No. 10-1778 as “being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority. Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experience-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate. The BER notes that in BER Case 98-3, the BER stated that technology must not replace or be used as a substitute for engineering judgment.
Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints. Recommendations from the program or intern should not be blindly accepted; they should be considered and challenged, and the resulting outputs should be understood. Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted. These are steps that, in this case, Engineer A chose not to follow. While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency. BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others. AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product. Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust.

Respond with a JSON structure.
Here's a CONCRETE EXAMPLE showing the required linkage:

EXAMPLE (if the case mentions "Engineer A faced a conflict when discovering his brother worked for the contractor"):

{
  "new_state_classes": [
    {
      "label": "Family Conflict of Interest",
      "definition": "A state where a professional's family relationships create potential bias in professional decisions",
      "activation_conditions": ["Discovery of family member involvement", "Family member has financial interest"],
      "termination_conditions": ["Recusal from decision", "Family member withdraws"],
      "persistence_type": "inertial",
      "affected_obligations": ["Duty of impartiality", "Disclosure requirements"],
      "temporal_properties": "Persists until formally addressed through recusal or disclosure",
      "domain_context": "Engineering",
      "examples_from_case": ["Engineer A discovered brother worked for ABC Contractors"],
      "source_text": "Engineer A faced a conflict when discovering his brother worked for the contractor",
      "confidence": 0.85,
      "rationale": "Specific type of conflict not covered by general COI in existing ontology"
    }
  ],
  "state_individuals": [
    {
      "identifier": "EngineerA_FamilyConflict_ABCContractors",
      "state_class": "Family Conflict of Interest",
      "subject": "Engineer A",
      "initiated_by": "Discovery that brother is senior manager at ABC Contractors",
      "initiated_at": "When bidding process began",
      "terminated_by": "Engineer A recused from contractor selection",
      "terminated_at": "Two weeks after discovery",
      "affects_obligations": ["Maintain impartial contractor selection", "Disclose conflicts to client"],
      "urgency_level": "high",
      "related_parties": ["Client B", "ABC Contractors", "Engineer A's brother"],
      "case_involvement": "Led to Engineer A's recusal from contractor selection process",
      "source_text": "Engineer A discovered his brother is senior manager at ABC Contractors during the bidding process",
      "is_existing_class": false,
      "confidence": 0.9
    }
  ]
}

YOUR RESPONSE FORMAT (use the same structure with YOUR case's specific details):

{
  "new_state_classes": [
    // For each new state type you discover
  ],
  "state_individuals": [
    // For each specific instance in the case (MUST have at least one per new class)
  ]
}

EXTRACTION RULES:
1. For EVERY new state class you identify, you MUST provide at least one corresponding state individual
2. State individuals MUST have a clear subject (specific person/organization from the case)
3. If you cannot identify a specific instance, do not create the state class
4. States without subjects are invalid (e.g., cannot have "general emergency" - must be "City M's water emergency")
5. Each state individual should clearly demonstrate why its state class is needed

Focus on states that:
1. Are attached to specific individuals or organizations mentioned in the case
2. Have clear temporal boundaries (when initiated, when terminated)
3. Affect specific ethical obligations or professional duties
4. Show causal relationships with events in the case
5. Demonstrate the context-dependent nature of professional ethics

EXAMPLE OF CORRECT EXTRACTION:
State Class: "Public Health Risk State"
State Individual: "City_M_PublicHealthRisk_2023" with subject="City M", initiated_by="Decision to change water source", affects_obligations=["Ensure public safety", "Provide clean water"]

EXAMPLE OF INCORRECT EXTRACTION:
State Class: "Emergency Situation" with NO corresponding individual (INVALID - no specific instance)
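The linkage rules above (every new state class must be grounded by at least one state individual, and every individual must name a concrete subject) can be checked mechanically once the model's response is parsed. A minimal sketch, assuming the response follows the JSON schema shown; the function name and sample payload are illustrative, not part of the prompt:

```python
import json

def validate_state_extraction(response_text: str) -> list[str]:
    """Return a list of violations of the extraction rules.

    Checks the two hard constraints from the prompt:
      1. every new state class has at least one state individual, and
      2. every state individual names a concrete subject.
    """
    data = json.loads(response_text)
    errors = []

    classes = {c["label"] for c in data.get("new_state_classes", [])}
    individuals = data.get("state_individuals", [])

    # Rule 1: each new class must be demonstrated by >= 1 individual.
    used_classes = {i.get("state_class") for i in individuals}
    for label in sorted(classes - used_classes):
        errors.append(f"state class '{label}' has no state individual")

    # Rule 2 (and rule 4): states without subjects are invalid.
    for ind in individuals:
        if not ind.get("subject"):
            errors.append(f"individual '{ind.get('identifier')}' has no subject")

    return errors

# A well-formed response (hypothetical class/individual for this case)
sample = '''{
  "new_state_classes": [{"label": "Client Privacy Breach State"}],
  "state_individuals": [{
    "identifier": "EngineerA_PrivacyBreach_ClientW",
    "state_class": "Client Privacy Breach State",
    "subject": "Engineer A"
  }]
}'''
print(validate_state_extraction(sample))  # → []
```

A response that proposes a class with no instance would instead yield a violation, which is exactly the "EXAMPLE OF INCORRECT EXTRACTION" case above.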
Saved: 2026-01-05 18:49
Resources Extraction
LLM Prompt
EXISTING RESOURCE CLASSES IN ONTOLOGY (DO NOT RE-EXTRACT THESE):
- Legal Resource: Legal framework constraining professional practice
- Resource: An independent continuant entity that serves as input or reference for professional activities. This is the Rs component of the formal specification D=(R,P,O,S,Rs,A,E,Ca,Cs).
- Resource Constrained: Resource limitation affecting available actions
- Resource Constraint: Limitations on available time, budget, materials, or human resources (Ganascia 2007)
- Resource Type: Meta-class for specific resource types recognized by the ProEthica system
- Resources Available: Resource sufficiency enabling full options

IMPORTANT: Only extract NEW resource types not listed above!

You are analyzing a professional ethics case to extract both RESOURCE CLASSES and RESOURCE INDIVIDUALS.

DEFINITIONS:
- RESOURCE CLASS: A type of document, tool, standard, or knowledge source (e.g., "Emergency Response Protocol", "Technical Specification", "Ethics Code")
- RESOURCE INDIVIDUAL: A specific instance of a resource used in this case (e.g., "NSPE Code of Ethics 2023", "City M Water Quality Standards")

CRITICAL REQUIREMENT: Every RESOURCE CLASS you identify MUST be based on at least one specific RESOURCE INDIVIDUAL instance in the case. You cannot propose a resource class without providing the concrete instance(s) that demonstrate it.

YOUR TASK - Extract two LINKED types of entities:

1. NEW RESOURCE CLASSES (types not in the existing ontology above):
   - Novel types of resources discovered in this case
   - Must be sufficiently general to apply to other cases
   - Should represent distinct categories of decision-making resources
   - Consider documents, tools, standards, guidelines, databases, etc.

2. RESOURCE INDIVIDUALS (specific instances in this case):
   - Specific documents, tools, or knowledge sources mentioned
   - MUST have identifiable titles or descriptions
   - Include metadata (creator, date, version) where available
   - Map to existing classes where possible, or to new classes you discover

EXTRACTION GUIDELINES:

For NEW RESOURCE CLASSES, identify:
- Label: Clear, professional name for the resource type
- Definition: What this resource type represents
- Resource type: document, tool, standard, guideline, database, etc.
- Accessibility: public, restricted, proprietary, etc.
- Authority source: Who typically creates/maintains these resources
- Typical usage: How these resources are typically used
- Domain context: Medical/Engineering/Legal/etc.
- Examples from case: Specific instances showing this resource type

For RESOURCE INDIVIDUALS, identify:
- Identifier: Unique descriptor (e.g., "NSPE_CodeOfEthics_2023")
- Resource class: Which resource type it represents (existing or new)
- Document title: Official name or description
- Created by: Organization or authority that created it
- Created at: When it was created (if mentioned)
- Version: Edition or version information
- URL or location: Where to find it (if mentioned)
- Used by: Who used this resource in the case
- Used in context: How this resource was applied
- Case involvement: How this resource affected decisions

CASE TEXT FROM discussion SECTION:

The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally. Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category. The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly.
However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research. The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances.

Almost 35 years ago, in BER Case 90-6, the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools. The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system. The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case – use of AI. The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs. None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI]. The BER considers the change to CAD to merely represent a drafting enhancement. The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology. As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement. That translates to a scenario for potential liability. In BER Case 90-6, the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control.
The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution, culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks. Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception. Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies. Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here. Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. Per Code section II.1.c, confidential information can only be shared with prior consent of the Client. While careful review and checking of AI-generated content was consistent with ethical use, this does not end the inquiry into Engineer A’s actions.
When Engineer A uploaded Client W’s information into the AI open-source interface, this was tantamount to placing the Client’s private information in the public domain. The facts here do not indicate Engineer A obtained permission from Client W to use the private information in the public domain. Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority. Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations. Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical.

In addition to using AI to prepare the report, Engineer A also prepared draft design documents with an AI-assisted drafting tool that was new to the market. Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions. When Client W reviewed the design documents, they found misaligned dimensions, and key safety features (including those necessary for compliance with local regulations) were omitted. Turning to the omission of key safety features in the AI-generated plans, the BER looked at a similar situation previously. BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work. The solicitation read “Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse - no matter your design experience. For instance, never designed a highway before? No problem. Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software.
In its discussion in BER Case 98-3, the BER reviewed several cases involving engineering competency and concluded it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case. They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances. Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement or a substitute for engineering judgment.

Thus, Engineer A’s approach to reviewing AI-generated engineering designs presents greater ethical concerns than the ultimate use of AI for report writing. While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A’s ethical and professional obligations. To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3, Engineer A is not incompetent. The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight. Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process for their output, Engineer A risked violating this requirement. Furthermore, the failure to detect misaligned dimensions and omitted safety features indicates that Engineer A did not exercise sufficient diligence.
The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public.” Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns. AI-generated technical work requires at least the same level of scrutiny as human-created work. Engineer A did not maintain responsible charge, in violation of licensure law and Code section III.8.a. NSPE defines “Responsible Charge” in NSPE Position Statement No. 10-1778 as “being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority. Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experience-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate. The BER notes that in BER Case 98-3, the BER stated that technology must not replace or be used as a substitute for engineering judgment.

Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints. Recommendations from the program or intern should not be blindly accepted; they should be considered and challenged, and the resulting outputs should be understood. Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted. These are steps that, in this case, Engineer A chose not to follow.
While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency. BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others. AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product. Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust.

Respond with a JSON structure. Here's an EXAMPLE:

EXAMPLE (if the case mentions "Engineer A consulted the NSPE Code of Ethics and the state's engineering regulations"):

{
  "new_resource_classes": [
    {
      "label": "State Engineering Regulations",
      "definition": "Legal requirements and regulations governing engineering practice at the state level",
      "resource_type": "regulatory_document",
      "accessibility": ["public", "official"],
      "authority_source": "State Engineering Board",
      "typical_usage": "Legal compliance and professional practice guidance",
      "domain_context": "Engineering",
      "examples_from_case": ["State engineering regulations consulted by Engineer A"],
      "source_text": "Engineer A consulted the state's engineering regulations",
      "confidence": 0.85,
      "rationale": "Specific type of regulatory resource not in existing ontology"
    }
  ],
  "resource_individuals": [
    {
      "identifier": "NSPE_CodeOfEthics_Current",
      "resource_class": "Professional Ethics Code",
      "document_title": "NSPE Code of Ethics",
      "created_by": "National Society of Professional Engineers",
      "created_at": "Current version",
      "version": "Current",
      "used_by": "Engineer A",
      "used_in_context": "Consulted for ethical guidance on conflict of interest",
      "case_involvement": "Provided framework for ethical decision-making",
      "source_text": "Engineer A consulted the NSPE Code of Ethics",
      "is_existing_class": true,
      "confidence": 0.95
    },
    {
      "identifier": "State_Engineering_Regulations_Current",
      "resource_class": "State Engineering Regulations",
      "document_title": "State Engineering Practice Act and Regulations",
      "created_by": "State Engineering Board",
      "used_by": "Engineer A",
      "used_in_context": "Referenced for legal requirements",
      "case_involvement": "Defined legal obligations for professional practice",
      "source_text": "Engineer A referenced the State Engineering Practice Act and Regulations",
      "is_existing_class": false,
      "confidence": 0.9
    }
  ]
}

EXTRACTION RULES:
1. For EVERY new resource class you identify, you MUST provide at least one corresponding resource individual
2. Resource individuals MUST have identifiable titles or descriptions
3. If you cannot identify a specific instance, do not create the resource class
4. Focus on resources that directly influence decision-making in the case
5. Each resource individual should clearly demonstrate why its resource class is needed

Focus on resources that:
1. Are explicitly mentioned or referenced in the case
2. Guide professional decisions or actions
3. Provide standards, requirements, or frameworks
4. Serve as knowledge sources for the professionals involved
Saved: 2026-01-05 18:50