Step 2: Normative Requirements Pass (Discussion Section)

Extract principles, obligations, constraints, and capabilities from the discussion section

Use of Artificial Intelligence in Engineering Practice
Step 2 of 5

Discussion Section

Section Content:
Discussion:
The Board of Ethical Review (BER) has a long history of openly welcoming the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally.
Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category.
The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly.
However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research.
The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances.
Almost 35 years ago, in BER Case 90-6, the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools.
The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system.
The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case – use of AI.
The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs.
None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI].
The BER considers the change to CAD to merely represent a drafting enhancement.
The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology.
As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement.
That translates to a scenario for potential liability.
In BER Case 90-6, the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control.
The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution.
These considerations culminate in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more?
Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks.
Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception.
Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies.
Fundamental Canon I.5 requires engineers to “avoid deceptive acts,” and that canon was not violated here.
Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff.
Per Code section II.1.c, confidential information can only be shared with prior consent of the Client.
While Engineer A’s careful review and checking of the AI-generated content was consistent with ethical use, this does not end the inquiry into Engineer A’s actions.
When Engineer A uploaded Client W’s information into the open-source AI interface, this was tantamount to placing the Client’s private information in the public domain.
The facts here do not indicate that Engineer A obtained Client W’s permission to place that private information in the public domain.
Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority.
Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations.
Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical.
In addition to using AI to prepare the report, Engineer A also prepared draft design documents with an AI-assisted drafting tool that was new to the market.
Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions.
When Client W reviewed the design documents, they found that dimensions were misaligned and that key safety features (including those necessary for compliance with local regulations) had been omitted.
Turning to the omission of key safety features in the AI-generated plans, the BER has previously looked at a similar situation.
BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work.
The solicitation read “Now – thanks to a revolutionary new CD-ROM – specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse – no matter your design experience.
For instance, never designed a highway before?
No problem.
Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software.
In its discussion in BER Case 98-3, the BER reviewed several cases involving engineering competency and concluded that, based on the facts presented, it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM.
They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances.
Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement or a substitute for engineering judgment.
Thus, Engineer A’s approach to reviewing the AI-generated engineering designs presents greater ethical concerns than Engineer A’s use of AI for report writing.
While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and conflict with Engineer A’s ethical and professional obligations.
To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3, Engineer A is not incompetent.
The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight.
Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process for their output, Engineer A risked violating this requirement.
Furthermore, the failure to detect misaligned dimensions and omitted safety features indicates that Engineer A did not exercise sufficient diligence.
The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public.”
Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns.
AI-generated technical work requires at least the same level of scrutiny as human-created work.
Engineer A did not maintain responsible charge, in violation of licensure law and Code section III.8.a.
NSPE defines “Responsible Charge” in NSPE Position Statement No. 10-1778 as “being actively engaged in the engineering process, from conception to completion.
Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority.
Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experience-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate.
As the BER stated in BER Case 98-3, technology must not replace or be used as a substitute for engineering judgment.
Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints.
Recommendations from the program or the intern should not be blindly accepted; they should be considered and challenged, and the resulting outputs should be understood.
Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted.
These are steps that, in this case, Engineer A chose not to follow.
While Engineer A reviewed the content, the failure to disclose the use of AI raises concerns about transparency.
BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others.
AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product.
Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust.
Principles Extraction
LLM Prompt
DUAL PRINCIPLE EXTRACTION - Ethical Principles Analysis THEORETICAL CONTEXT (Chapter 2.2.2): - Principles are ABSTRACT ethical foundations requiring extensional definition through cases - They function like constitutional principles - open-textured and requiring interpretation - Principles mediate moral ideals into concrete reality through context-specific application - They cannot be applied deductively but require balancing and interpretation EXISTING PRINCIPLE CLASSES IN ONTOLOGY: - Competence Principle: Balances professional autonomy with recognition of limitations and need for collaboration. - Confidentiality Principle: Must be balanced against public safety obligations and legal disclosure requirements. - Domain-Specific Principle: Examples: Environmental Stewardship (engineering), Patient Autonomy (medicine), Academic Freedom (education). These address domain-specific ethical ch - Environmental Stewardship Principle: Bridges engineering practice with broader environmental and societal responsibilities. - Ethical Principle: Legacy synonym for Principle. Use :Principle for new concepts. - Fundamental Ethical Principle: Examples: Public Welfare Paramount, Respect for Persons, Justice, Beneficence. These require extensive interpretation and balancing in specific contex - Integrity Principle: Central to professional identity formation and trust maintenance in professional relationships. - Principle: An information content entity representing ethical values and guidelines for conduct. This is the P component of the formal specification D=(R,P,O,S,R - Principle Argument: An argument based on fundamental ethical principles or values. - Principle Refinement: Capability to evolve principles as inconsistencies are resolved and new cases are added (GenEth system, Anderson & Anderson 2018) - Professional Virtue Principle: Examples: Integrity, Competence, Honesty, Professional Courage, Accountability. These shape professional character and decision-making. - Public Welfare Principle: Requires contextual interpretation to balance against client confidentiality and employer loyalty when public safety is at risk. - Relational Principle: Examples: Confidentiality, Loyalty, Fairness, Transparency, Respect for Autonomy. These govern interpersonal professional dynamics. - Transparency Principle: Essential for maintaining trust and enabling stakeholder participation in professional decisions. === TASK === From the following case text (discussion section), extract information at TWO levels: LEVEL 1 - NEW PRINCIPLE CLASSES: Identify ethical principles that appear to be NEW types not covered by existing classes above. Look for: - Fundamental ethical values being invoked - Abstract moral ideals guiding decisions - Constitutional-like principles requiring interpretation - Values that transcend specific rules or obligations For each NEW principle class, provide: - label: Clear principle name (e.g., "Environmental Stewardship", "Professional Autonomy") - definition: What moral ideal this principle represents - abstract_nature: The abstract ethical foundation (justice, welfare, autonomy, etc.) 
- extensional_examples: Concrete cases/situations where this principle applies - value_basis: Core moral value underlying the principle - application_context: Professional domains or situations where relevant - operationalization: How this abstract principle becomes concrete in practice - balancing_requirements: What other principles it typically must be balanced against - examples_from_case: How this principle appears in the case text LEVEL 2 - PRINCIPLE INDIVIDUALS: Identify specific instances where principles are invoked or applied. For each instance: - identifier: Unique identifier for this principle instance (e.g., "PublicSafety_Case8_Discussion") - principle_class: Which principle class it instantiates (use existing classes when possible) - concrete_expression: EXACT text showing how the principle is expressed - invoked_by: Who invokes or appeals to this principle - applied_to: What decision/situation/dilemma it applies to - interpretation: How the principle is interpreted in this specific context - balancing_with: Other principles that must be balanced against it - tension_resolution: How conflicts between principles are resolved - case_relevance: Why this principle matters in this specific case IMPORTANT: - Focus on ABSTRACT ethical foundations, not specific rules or procedures - Principles are broader than obligations - they generate obligations in context - Use EXACT quotes from case text where principles are expressed - Distinguish between the abstract principle CLASS and its concrete APPLICATION CASE TEXT: The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally. Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category. The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly. However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research. The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances. Almost 35 years ago, in BER Case 90-6 , the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools. The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system. The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case – use of AI. The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs. None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI]. The BER considers the change to CAD to merely represent a drafting enhancement. The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology. 
As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement. That translates to a scenario for potential liability. In BER Case 90-6 , the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control. The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution. Culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks. Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception. Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies. Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here. Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. Per Code section II.1.c, confidential information can only be shared with prior consent of the Client. While careful review and checking of AI-generated content was consistent for ethical use, this does not end the inquiry into Engineer A’s actions. When Engineer A uploaded Client W’s information into the AI open-source interface, this was tantamount to placing the Client’s private information in the public domain. The facts here do not indicate Engineer A obtained permission from Client W to use the private information in the public domain. Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority. Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations. Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical. In addition to using AI to prepare the report, Engineer A also prepared draft design documents with a AI-assisted drafting tool that was new to the market. Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions. 
When Client W reviewed the design documents, they found misaligned dimensions and key safety features (including those necessary for compliance with local regulations) were omitted. Turning to the omission of key safety features in the AI-generated plans, the BER looked at a similar situation previously. BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work. The solicitation read “Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse - no matter your design experience. For instance, never designed a highway before? No problem. Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software. In its discussion in BER Case 98-3 , the BER reviewed several cases involving engineering competency, and concluded it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case. They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances. Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment. Thus, Engineer A’s approach to reviewing AI-generated engineering designs presents greater ethical concerns than the ultimate use of AI for report writing. While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A’s ethical and professional obligations. To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3 , Engineer A is not incompetent. The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight. Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process of its output, Engineer A risked violating this requirement. Furthermore, failure to detect misaligned dimensions and omitted safety features further indicates that Engineer A did not exercise sufficient diligence. The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with the Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public”. Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns. AI-generated technical work requires at least the same level of scrutiny as human-created work. Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a. NSPE defines “Responsible Charge” in NSPE Position Statement No. 
10-1778 as “being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority. Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experienced-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate. The BER notes that in BER Case 98-3 , the BER stated that technology must not replace or be used as a substitute for engineering judgement. Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints. Recommendations from the program or intern should not be blindly accepted, they should be considered and challenged and the resulting outputs should be understood. Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted. These are steps that, in this case, Engineer A chose not to follow. While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency. BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others. AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product. Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust. 
Respond with valid JSON in this format: { "new_principle_classes": [ { "label": "Sustainable Development", "definition": "Principle that engineering solutions must balance current needs with long-term environmental and societal impacts", "abstract_nature": "Intergenerational justice and environmental stewardship", "extensional_examples": ["Green building design", "Renewable energy projects", "Resource conservation"], "value_basis": "Responsibility to future generations", "application_context": ["Infrastructure projects", "Environmental engineering", "Urban planning"], "operationalization": "Through environmental impact assessments, lifecycle analysis, sustainable design criteria", "balancing_requirements": ["Economic feasibility", "Immediate safety needs", "Client requirements"], "examples_from_case": ["Engineer considered long-term environmental impacts", "balanced immediate needs with sustainability"], "source_text": "Engineer considered long-term environmental impacts and balanced immediate needs with sustainability" } ], "principle_individuals": [ { "identifier": "PublicSafety_Case8_Facts", "principle_class": "Public Safety", "concrete_expression": "the safety of the public must be held paramount", "invoked_by": ["Engineer L"], "applied_to": ["stormwater management system design"], "interpretation": "Safety considerations override cost savings in drainage design", "source_text": "the safety of the public must be held paramount", "balancing_with": ["Cost Efficiency", "Client Interests"], "tension_resolution": "Safety takes precedence even if it increases project costs", "case_relevance": "Critical for evaluating adequacy of proposed drainage solution" } ] }
Saved: 2026-01-05 18:53
LLM Response
```json
{
    "new_principle_classes": [
        {
            "label": "Technological Prudence",
            "definition": "Principle that new technologies must be adopted with careful consideration of their limitations and appropriate professional oversight",
            "abstract_nature": "Professional judgment and responsible innovation",
            "extensional_examples": ["AI tool adoption", "CADD system implementation", "Automated design software use"],
            "value_basis": "Professional judgment cannot be replaced by technology",
            "application_context": ["Engineering practice modernization", "Technology adoption", "AI integration"],
            "operationalization": "Through comprehensive verification processes, maintaining direction and control, understanding tool limitations",
            "balancing_requirements": ["Innovation adoption", "Efficiency gains", "Traditional practice methods"],
            "examples_from_case": ["technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment"],
            "source_text": "technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment"
        },
        {
            "label": "Responsible Charge",
            "definition": "Principle requiring active engagement and personal involvement in engineering decisions from conception to completion",
            "abstract_nature": "Professional accountability and direct oversight",
            "extensional_examples": ["Project supervision", "Design review", "Document approval", "AI output verification"],
            "value_basis": "Personal professional accountability",
            "application_context": ["Project management", "Design oversight", "AI-assisted work", "Team supervision"],
            "operationalization": "Being actively engaged in the engineering process, personally making engineering decisions, providing supervisory direction and control",
            "balancing_requirements": ["Delegation needs", "Efficiency demands", "Technology assistance"],
            "examples_from_case": ["being actively engaged in the engineering process, from conception to completion"],
            "source_text": "being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer"
        }
    ],
    "principle_individuals": [
        {
            "identifier": "PublicWelfare_AICase_Canon",
            "principle_class": "Public Welfare Principle",
            "concrete_expression": "hold paramount the safety, health, and welfare of the public",
            "invoked_by": ["BER"],
            "applied_to": ["AI-generated design documents with safety omissions"],
            "interpretation": "AI errors that could lead to safety hazards violate the paramount duty to public welfare",
            "source_text": "The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with the Fundamental Canon I.1, 'hold paramount the safety, health, and welfare of the public'",
            "balancing_with": ["Efficiency through AI use", "Time savings"],
            "tension_resolution": "Public safety overrides efficiency gains from AI tools",
            "case_relevance": "Central to evaluating whether Engineer A's limited review of AI output was ethical"
        },
        {
            "identifier": "Competence_AICase_Canon",
            "principle_class": "Competence Principle",
            "concrete_expression": "perform services only in areas of their competence",
            "invoked_by": ["BER"],
            "applied_to": ["Engineer A's use of AI for environmental engineering work"],
            "interpretation": "Competence extends to proper use and oversight of AI tools, not just technical domain knowledge",
            "source_text": "Fundamental Canon I.2 states that engineers 'perform services only in areas of their competence'",
            "balancing_with": ["Innovation adoption", "Efficiency needs"],
            "tension_resolution": "Competence requirement satisfied when engineer has expertise and properly reviews AI output",
            "case_relevance": "Establishes that Engineer A had domain competence but may have lacked competence in AI tool oversight"
        },
        {
            "identifier": "Transparency_AICase_Attribution",
            "principle_class": "Transparency Principle",
            "concrete_expression": "Engineers shall give credit for engineering work to those to whom credit is due",
            "invoked_by": ["BER"],
            "applied_to": ["AI's substantial contribution to report and design documents"],
            "interpretation": "AI contributions should be disclosed even though AI is not human",
            "source_text": "AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, '[e]ngineers shall give credit for engineering work to those to whom credit is due'",
            "balancing_with": ["Professional autonomy", "Client expectations"],
            "tension_resolution": "Proactive disclosure strengthens trust despite no universal guidelines",
            "case_relevance": "Critical for maintaining trust when AI substantially contributes to work product"
        },
        {
            "identifier": "Confidentiality_AICase_Upload",
            "principle_class": "Confidentiality Principle",
            "concrete_expression": "confidential information can only be shared with prior consent of the Client",
            "invoked_by": ["BER"],
            "applied_to": ["Engineer A uploading client data to open-source AI interface"],
            "interpretation": "Using client data in public AI systems violates confidentiality without explicit permission",
            "source_text": "When Engineer A uploaded Client W's information into the AI open-source interface, this was tantamount to placing the Client's private information in the public domain",
            "balancing_with": ["Efficiency of AI use", "Technical capabilities"],
            "tension_resolution": "Confidentiality cannot be compromised for technological convenience",
            "case_relevance": "Identifies a clear ethical violation in Engineer A's use of AI"
        },
        {
            "identifier": "ProfessionalJudgment_AICase_Substitute",
            "principle_class": "Professional Virtue Principle",
            "concrete_expression": "technology must not replace or be used as a substitute for engineering judgement",
            "invoked_by": ["BER"],
            "applied_to": ["Engineer A's reliance on AI-generated plans without proper oversight"],
            "interpretation": "Professional judgment must guide and validate all technology outputs",
            "source_text": "the BER stated that technology must not replace or be used as a substitute for engineering judgement",
            "balancing_with": ["Efficiency gains", "Innovation adoption"],
            "tension_resolution": "Technology enhances but cannot replace professional judgment",
            "case_relevance": "Core principle violated when Engineer A conducted only high-level review"
        },
        {
            "identifier": "ResponsibleCharge_AICase_Violation",
            "principle_class": "Responsible Charge",
            "concrete_expression": "being actively engaged in the engineering process, from conception to completion",
            "invoked_by": ["BER", "NSPE Position Statement"],
            "applied_to": ["Engineer A's oversight of AI-generated engineering plans"],
            "interpretation": "Reviewing AI output without involvement in design process violates responsible charge",
            "source_text": "Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge",
            "balancing_with": ["Time efficiency", "AI capabilities"],
            "tension_resolution": "Active engagement required regardless of AI assistance",
            "case_relevance": "Engineer A failed to maintain responsible charge by not being actively engaged"
        },
        {
            "identifier": "Integrity_AICase_Deception",
            "principle_class": "Integrity Principle",
            "concrete_expression": "avoid deceptive acts",
            "invoked_by": ["BER"],
            "applied_to": ["Engineer A's use of AI for report writing"],
            "interpretation": "Using AI assistance is not inherently deceptive if work is accurate and properly reviewed",
            "source_text": "Fundamental Canon I.5 requires an Engineer to 'avoid deceptive acts,' which was not violated here",
            "balancing_with": ["Transparency requirements", "Attribution needs"],
            "tension_resolution": "No deception if qualifications not misrepresented and content accurate",
            "case_relevance": "Establishes that AI use itself is not deceptive"
        }
    ]
}
```
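The fenced JSON above conforms to the structure requested in the prompt. As a minimal sketch of how such a response might be consumed downstream (an assumption about the pipeline, not a documented part of it), the snippet below strips the Markdown fence, parses the payload, and checks that the top-level keys each extraction pass requests are present. The names parse_extraction_response and REQUIRED_KEYS are illustrative.

```python
import json
import re

# Illustrative sketch (assumption): parse a fenced LLM response and verify the
# top-level keys that the principles and obligations prompts ask for.
REQUIRED_KEYS = {
    "principles": {"new_principle_classes", "principle_individuals"},
    "obligations": {"new_obligation_classes", "obligation_individuals"},
}

# Matches the opening (optionally language-tagged) and closing Markdown code fences.
FENCE_RE = re.compile(r"^`{3}(?:json)?\s*|\s*`{3}\s*$", re.MULTILINE)


def parse_extraction_response(raw: str, pass_name: str) -> dict:
    """Strip the code fence, parse the JSON, and check its top-level structure."""
    payload = json.loads(FENCE_RE.sub("", raw).strip())
    missing = REQUIRED_KEYS[pass_name] - set(payload)
    if missing:
        raise ValueError(f"{pass_name} response is missing keys: {sorted(missing)}")
    return payload


# Example with the principles response shown above:
# principles = parse_extraction_response(response_text, "principles")
# len(principles["principle_individuals"])  # 7 individuals in this case
```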
Obligations Extraction
LLM Prompt
You are an expert in professional ethics analyzing a case for obligations (professional duties and requirements). Based on the literature: - Obligations are CONCRETE PROFESSIONAL DUTIES derived from abstract principles (Hallamaa & Kalliokoski 2022) - They specify what professionals MUST, SHOULD, or MUST NOT do (Dennis et al. 2016) - Obligations have deontic force and are enforceable (Wooldridge & Jennings 1995) - They operationalize principles in specific contexts (Kong et al. 2020) Your task is to: 1. Identify NEW OBLIGATION CLASSES not in the existing ontology 2. Extract SPECIFIC OBLIGATION INDIVIDUALS from the case EXISTING OBLIGATIONS IN ONTOLOGY (check if your identified obligations match these before creating new classes): - Attribution Obligation: Duty to properly cite and acknowledge all sources, tools, and methods used in creating professional deliverables - Collegial Obligation: Duties toward professional peers including respect, fairness, and credit for work - Competence Obligation: Requirement to perform services only in areas of competence (NSPE II.2) - Conditional Obligation: Obligations that apply only when specific conditions are met - Confidentiality Obligation: Duty to protect confidential information of clients or employers (NSPE III.4) - Defeasible Obligation: Obligations that admit justified exceptions under specified conditions (Ganascia 2007) - Disclosure Obligation: Requirement to inform stakeholders about conflicts, limitations, or risks (NSPE II.3.a) - Ethical Obligation: Obligations arising from ethical principles beyond legal requirements - Legal Obligation: Obligations arising from legal requirements and regulations - Mandatory Obligation: Obligations that MUST be fulfilled (deontic force: mandatory) - Obligation: An information content entity expressing required actions or behaviors in professional contexts. This is the O component of the formal specification D=(R,P,O,S,Rs,A,E,Ca,Cs). - Prima Facie Obligation: Obligations that hold at first appearance but may be overridden by stronger duties (Anderson & Anderson) - Professional Obligation: A duty or responsibility arising from professional role or standards - Quality Assurance Obligation: Requirement to implement appropriate review and verification processes for all professional deliverables before submission - Reporting Obligation: Duty to report violations or unsafe conditions to appropriate authorities - Responsible Charge Obligation: Duty to maintain active engagement and personal decision-making authority throughout engineering process from conception to completion - Safety Obligation: Duty to hold paramount the safety, health, and welfare of the public (NSPE I.1) - Technology Competence Obligation: Requirement to understand and verify the capabilities, limitations, and risks of technological tools used in professional practice - Technology Verification Obligation: Requirement to thoroughly verify and validate outputs from technological tools including AI before incorporating them into professional work Analyze this discussion section: The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally. Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category. The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly. 
However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research. The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances. Almost 35 years ago, in BER Case 90-6 , the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools. The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system. The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case – use of AI. The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs. None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI]. The BER considers the change to CAD to merely represent a drafting enhancement. The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology. As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement. That translates to a scenario for potential liability. In BER Case 90-6 , the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control. The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution. Culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks. Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception. Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies. Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here. Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. 
Per Code section II.1.c, confidential information can only be shared with prior consent of the Client. While careful review and checking of AI-generated content was consistent for ethical use, this does not end the inquiry into Engineer A’s actions. When Engineer A uploaded Client W’s information into the AI open-source interface, this was tantamount to placing the Client’s private information in the public domain. The facts here do not indicate Engineer A obtained permission from Client W to use the private information in the public domain. Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority. Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations. Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical. In addition to using AI to prepare the report, Engineer A also prepared draft design documents with a AI-assisted drafting tool that was new to the market. Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions. When Client W reviewed the design documents, they found misaligned dimensions and key safety features (including those necessary for compliance with local regulations) were omitted. Turning to the omission of key safety features in the AI-generated plans, the BER looked at a similar situation previously. BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work. The solicitation read “Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse - no matter your design experience. For instance, never designed a highway before? No problem. Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software. In its discussion in BER Case 98-3 , the BER reviewed several cases involving engineering competency, and concluded it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case. They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances. Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment. Thus, Engineer A’s approach to reviewing AI-generated engineering designs presents greater ethical concerns than the ultimate use of AI for report writing. While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A’s ethical and professional obligations. To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3 , Engineer A is not incompetent. 
The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight. Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process of its output, Engineer A risked violating this requirement. Furthermore, failure to detect misaligned dimensions and omitted safety features further indicates that Engineer A did not exercise sufficient diligence. The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with the Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public”. Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns. AI-generated technical work requires at least the same level of scrutiny as human-created work. Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a. NSPE defines “Responsible Charge” in NSPE Position Statement No. 10-1778 as “being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority. Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experienced-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate. The BER notes that in BER Case 98-3 , the BER stated that technology must not replace or be used as a substitute for engineering judgement. Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints. Recommendations from the program or intern should not be blindly accepted, they should be considered and challenged and the resulting outputs should be understood. Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted. These are steps that, in this case, Engineer A chose not to follow. While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency. BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others. AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product. 
Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust. Extract obligations following this JSON structure: { "new_obligation_classes": [ { "label": "Clear, specific obligation class name", "definition": "What this type of obligation requires professionals to do", "derived_from_principle": "Which principle this operationalizes (e.g., 'Public Safety', 'Honesty')", "duty_type": "professional|legal|ethical|societal", "enforcement_mechanism": "How this obligation is typically enforced", "violation_consequences": "What happens when this obligation is violated", "examples_from_case": ["Example 1 from the case", "Example 2"], "source_text": "EXACT text snippet from case where this obligation is identified (max 200 characters)", "confidence": 0.0-1.0, "reasoning": "Why this is a new class not in existing ontology" } ], "obligation_individuals": [ { "identifier": "Unique name for this specific obligation instance", "obligation_class": "Name of the obligation class (new or existing)", "obligated_party": "Who has this obligation (e.g., 'Engineer L', 'All Licensed PEs')", "obligation_statement": "The specific duty statement (e.g., 'Report safety risks to authorities')", "derived_from": "Source principle or law (e.g., 'NSPE Code', 'State Law')", "enforcement_context": "How enforced in this case", "temporal_scope": "When this obligation applies", "compliance_status": "met|unmet|unclear|pending", "case_context": "How this obligation manifests in the specific case", "source_text": "EXACT text snippet from case where this obligation is mentioned (max 200 characters)", "is_existing_class": true/false, "confidence": 0.0-1.0 } ] } Focus on: 1. NEW obligation types that represent novel professional duties 2. Specific obligation instances showing how duties apply in this case 3. The relationship between obligations and the principles they operationalize 4. Enforcement mechanisms and compliance status Return ONLY the JSON structure, no additional text.
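The obligation schema defined in this prompt maps naturally onto a typed record. Below is a minimal sketch, assuming Python dataclasses, that mirrors the prompt's JSON keys for a single obligation_individuals entry; the enum holds the four compliance_status values the prompt permits, and the populated example reuses the confidentiality finding from the response that follows. The class and field names are illustrative only, not part of the extraction tool.

```python
from dataclasses import dataclass
from enum import Enum


class ComplianceStatus(str, Enum):
    """The four states the prompt allows for compliance_status."""
    MET = "met"
    UNMET = "unmet"
    UNCLEAR = "unclear"
    PENDING = "pending"


@dataclass
class ObligationIndividual:
    """Illustrative typed view of one obligation_individuals entry."""
    identifier: str
    obligation_class: str
    obligated_party: str
    obligation_statement: str
    derived_from: str
    enforcement_context: str
    temporal_scope: str
    compliance_status: ComplianceStatus
    case_context: str
    source_text: str
    is_existing_class: bool
    confidence: float


# Example drawn from the confidentiality finding in the response below.
confidentiality_finding = ObligationIndividual(
    identifier="EngineerA_Confidentiality_ClientW",
    obligation_class="Confidentiality Obligation",
    obligated_party="Engineer A",
    obligation_statement="Must not share Client W's confidential information without prior consent",
    derived_from="NSPE Code II.1.c",
    enforcement_context="BER ethical review",
    temporal_scope="When using AI interface",
    compliance_status=ComplianceStatus.UNMET,
    case_context="Engineer A uploaded client information to open-source AI interface without permission",
    source_text="Engineer A uploaded Client W's information into the AI open-source interface...",
    is_existing_class=True,
    confidence=1.0,
)
```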
Saved: 2026-01-05 18:54
LLM Response
```json
{
  "new_obligation_classes": [
    {
      "label": "AI Disclosure Obligation",
      "definition": "Requirement to inform clients and stakeholders when AI substantially contributes to professional work products",
      "derived_from_principle": "Transparency and Honesty",
      "duty_type": "professional",
      "enforcement_mechanism": "Professional review boards, client complaints, peer review",
      "violation_consequences": "Loss of client trust, professional sanctions, potential liability",
      "examples_from_case": ["Engineer A failed to disclose AI use to Client W", "Client W questioned inconsistencies in the report"],
      "source_text": "proactive disclosure could have prevented misunderstandings and strengthened trust",
      "confidence": 0.9,
      "reasoning": "Distinct from general Disclosure Obligation - specifically addresses AI transparency requirements not covered in existing ontology"
    },
    {
      "label": "AI Output Verification Obligation",
      "definition": "Duty to conduct comprehensive verification of AI-generated technical work at least to the same level as human-created work",
      "derived_from_principle": "Professional Competence and Due Diligence",
      "duty_type": "professional",
      "enforcement_mechanism": "Professional standards enforcement, regulatory compliance checks",
      "violation_consequences": "Safety hazards, regulatory noncompliance, professional liability",
      "examples_from_case": ["Engineer A only conducted high-level review", "Failed to detect misaligned dimensions and omitted safety features"],
      "source_text": "AI-generated technical work requires at least the same level of scrutiny as human-created work",
      "confidence": 0.85,
      "reasoning": "More specific than Technology Verification Obligation - focuses on comprehensive scrutiny equal to human work standards"
    }
  ],
  "obligation_individuals": [
    {
      "identifier": "EngineerA_Competence_AI_Tools",
      "obligation_class": "Competence Obligation",
      "obligated_party": "Engineer A",
      "obligation_statement": "Perform services only in areas of competence including proper use of AI tools",
      "derived_from": "NSPE Code II.2.a",
      "enforcement_context": "BER review of professional conduct",
      "temporal_scope": "During groundwater monitoring project",
      "compliance_status": "met",
      "case_context": "Engineer A was competent in environmental engineering and performed thorough review",
      "source_text": "Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data",
      "is_existing_class": true,
      "confidence": 0.95
    },
    {
      "identifier": "EngineerA_Confidentiality_ClientW",
      "obligation_class": "Confidentiality Obligation",
      "obligated_party": "Engineer A",
      "obligation_statement": "Must not share Client W's confidential information without prior consent",
      "derived_from": "NSPE Code II.1.c",
      "enforcement_context": "BER ethical review",
      "temporal_scope": "When using AI interface",
      "compliance_status": "unmet",
      "case_context": "Engineer A uploaded client information to open-source AI interface without permission",
      "source_text": "Engineer A uploaded Client W's information into the AI open-source interface...tantamount to placing...in the public domain",
      "is_existing_class": true,
      "confidence": 1.0
    },
    {
      "identifier": "EngineerA_Attribution_AI_Sources",
      "obligation_class": "Attribution Obligation",
      "obligated_party": "Engineer A",
      "obligation_statement": "Give credit for engineering work including AI contributions and cite technical authorities",
      "derived_from": "NSPE Code III.9",
      "enforcement_context": "Professional standards review",
      "temporal_scope": "In final report submission",
      "compliance_status": "unmet",
      "case_context": "Failed to include citations and acknowledge AI contribution",
      "source_text": "facts do not indicate the AI-generated report included citations of pertinent documents of technical authority",
      "is_existing_class": true,
      "confidence": 0.95
    },
    {
      "identifier": "EngineerA_ResponsibleCharge_Design",
      "obligation_class": "Responsible Charge Obligation",
      "obligated_party": "Engineer A",
      "obligation_statement": "Maintain active engagement and personal decision-making from conception to completion",
      "derived_from": "NSPE Position Statement No. 10-1778",
      "enforcement_context": "Licensure law enforcement",
      "temporal_scope": "Throughout design document preparation",
      "compliance_status": "unmet",
      "case_context": "Only conducted high-level review without proper oversight",
      "source_text": "Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a",
      "is_existing_class": true,
      "confidence": 1.0
    },
    {
      "identifier": "EngineerA_Safety_PublicWelfare",
      "obligation_class": "Safety Obligation",
      "obligated_party": "Engineer A",
      "obligation_statement": "Hold paramount the safety, health, and welfare of the public",
      "derived_from": "NSPE Fundamental Canon I.1",
      "enforcement_context": "Professional ethics enforcement",
      "temporal_scope": "In all engineering work",
      "compliance_status": "unmet",
      "case_context": "Errors in AI-generated design could have led to safety hazards",
      "source_text": "errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards",
      "is_existing_class": true,
      "confidence": 0.95
    },
    {
      "identifier": "EngineerA_AI_Disclosure_ClientW",
      "obligation_class": "AI Disclosure Obligation",
      "obligated_party": "Engineer A",
      "obligation_statement": "Inform Client W when AI substantially contributes to work product",
      "derived_from": "Best practices and transparency principles",
      "enforcement_context": "Professional ethics and client relations",
      "temporal_scope": "Before or during work product delivery",
      "compliance_status": "unmet",
      "case_context": "Failed to disclose AI use, leading to client questions about inconsistencies",
      "source_text": "best practices suggest informing clients when AI substantially contributes to a work product",
      "is_existing_class": false,
      "confidence": 0.85
    },
    {
      "identifier": "EngineerA_AI_Verification_Designs",
      "obligation_class": "AI Output Verification Obligation",
      "obligated_party": "Engineer A",
      "obligation_statement": "Conduct comprehensive verification of AI-generated designs equal to human-created work standards",
      "derived_from": "Professional diligence standards",
      "enforcement_context": "Professional standards enforcement",
      "temporal_scope": "Before signing and sealing documents",
      "compliance_status": "unmet",
      "case_context": "Only conducted high-level review, missing critical errors",
      "source_text": "By relying on AI-assisted tools without a comprehensive verification process of its output, Engineer A risked violating",
      "is_existing_class": false,
      "confidence": 0.9
    },
    {
      "identifier": "EngineerA_QualityAssurance_Review",
      "obligation_class": "Quality Assurance Obligation",
      "obligated_party": "Engineer A",
      "obligation_statement": "Provide experienced-based quality assurance review of all work products",
      "derived_from": "Professional standards of practice",
      "enforcement_context": "Professional review and licensure requirements",
      "temporal_scope": "Before finalizing engineering documents",
      "compliance_status": "unmet",
      "case_context": "Failed to provide adequate QA review of AI-generated content",
      "source_text": "Engineer A, as the engineer in Responsible Charge...is required to provide an experienced-based quality assurance review",
      "is_existing_class": true,
      "confidence": 0.95
    }
  ]
}
```
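Before a response like the one above is merged into the ontology, the pipeline has to confirm the model actually honored the requested structure, since nothing stops it from dropping fields or wrapping the JSON in prose. Below is a minimal validation sketch, assuming a Python consumer; the key sets and allowed values are taken from the JSON template in the prompt, but the function name and the specific checks are illustrative and not the ProEthica implementation.

```python
import json

# Required keys and allowed values copied from the obligations JSON template in the prompt above.
OBLIGATION_CLASS_KEYS = {
    "label", "definition", "derived_from_principle", "duty_type",
    "enforcement_mechanism", "violation_consequences", "examples_from_case",
    "source_text", "confidence", "reasoning",
}
OBLIGATION_INDIVIDUAL_KEYS = {
    "identifier", "obligation_class", "obligated_party", "obligation_statement",
    "derived_from", "enforcement_context", "temporal_scope", "compliance_status",
    "case_context", "source_text", "is_existing_class", "confidence",
}
COMPLIANCE_VALUES = {"met", "unmet", "unclear", "pending"}


def validate_obligations(raw: str) -> list[str]:
    """Return a list of problems found in an obligations-pass response."""
    problems = []
    data = json.loads(raw)  # JSONDecodeError here means the model ignored "Return ONLY the JSON structure"
    for cls in data.get("new_obligation_classes", []):
        label = cls.get("label", "?")
        missing = OBLIGATION_CLASS_KEYS - cls.keys()
        if missing:
            problems.append(f"class {label}: missing {sorted(missing)}")
        if not 0.0 <= cls.get("confidence", -1.0) <= 1.0:
            problems.append(f"class {label}: confidence out of range")
        if len(cls.get("source_text", "")) > 200:
            problems.append(f"class {label}: source_text exceeds 200 characters")
    for ind in data.get("obligation_individuals", []):
        ident = ind.get("identifier", "?")
        missing = OBLIGATION_INDIVIDUAL_KEYS - ind.keys()
        if missing:
            problems.append(f"individual {ident}: missing {sorted(missing)}")
        if ind.get("compliance_status") not in COMPLIANCE_VALUES:
            problems.append(f"individual {ident}: invalid compliance_status")
    return problems
```

Applied to the response above, this returns an empty list; a truncated or malformed reply would be flagged here rather than failing later during ontology insertion.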
Constraints Extraction
LLM Prompt
You are an expert in professional ethics analyzing a case for constraints (boundaries, limitations, and restrictions). Based on the literature: - Constraints are INVIOLABLE BOUNDARIES that limit acceptable actions (Dennis et al. 2016) - They differ from obligations by being restrictions rather than requirements - Constraints can be legal, physical, resource-based, or procedural - They define the space within which ethical decisions must be made Your task is to: 1. Identify NEW CONSTRAINT CLASSES not in the existing ontology 2. Extract SPECIFIC CONSTRAINT INDIVIDUALS from the case EXISTING CONSTRAINTS IN ONTOLOGY (check if your identified constraints match these before creating new classes): - Attribution Constraint: Requirements and limitations regarding the disclosure and citation of tools, methods, or sources used in professional work products - Competence Constraint: Boundaries defined by agent capabilities and technical limitations (Hallamaa & Kalliokoski 2022) - Confidentiality Constraint: Boundaries on information disclosure and privacy (Dennis et al. 2016) - Constraint: An information content entity expressing limitations or restrictions on professional actions or decisions. This is the Cs component of the formal specification D=(R,P,O,S,Rs,A,E,Ca,Cs). - Defeasible Constraint: Constraints that admit justified exceptions under specified conditions (Ganascia 2007) - Ethical Constraint: Professional ethical boundaries beyond legal requirements (Benzmüller et al. 2020) - Inviolable Constraint: Absolute constraints that cannot be overridden under any circumstances (Dennis et al. 2016) - Jurisdictional Constraint: Boundaries of authority and geographical or organizational scope (Dennis et al. 2016) - Legal Constraint: Constraints arising from legal requirements and statutes that establish inviolable boundaries (Kroll 2020, Stenseke 2024) - Priority Constraint: Constraints establishing precedence when multiple requirements conflict (Scheutz & Malle 2014) - Procedural Constraint: Requirements for following specific processes or protocols (Furbach et al. 2014) - Regulatory Constraint: Constraints from regulatory bodies and professional standards organizations (Taddeo et al. 2024) - Resource Constraint: Limitations on available time, budget, materials, or human resources (Ganascia 2007) - Safety Constraint: Constraints ensuring safety requirements are met (Arkin 2008) - Technology Substitution Constraint: Boundaries preventing technology from replacing professional judgment and human oversight in critical decision-making Analyze this discussion section: The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally. Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category. The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly. However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research. The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances. Almost 35 years ago, in BER Case 90-6 , the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools. 
The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system. The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case – use of AI. The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs. None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI]. The BER considers the change to CAD to merely represent a drafting enhancement. The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology. As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement. That translates to a scenario for potential liability. In BER Case 90-6 , the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control. The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution. Culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks. Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception. Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies. Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here. Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. Per Code section II.1.c, confidential information can only be shared with prior consent of the Client. While careful review and checking of AI-generated content was consistent for ethical use, this does not end the inquiry into Engineer A’s actions. When Engineer A uploaded Client W’s information into the AI open-source interface, this was tantamount to placing the Client’s private information in the public domain. 
The facts here do not indicate Engineer A obtained permission from Client W to use the private information in the public domain. Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority. Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations. Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical. In addition to using AI to prepare the report, Engineer A also prepared draft design documents with a AI-assisted drafting tool that was new to the market. Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions. When Client W reviewed the design documents, they found misaligned dimensions and key safety features (including those necessary for compliance with local regulations) were omitted. Turning to the omission of key safety features in the AI-generated plans, the BER looked at a similar situation previously. BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work. The solicitation read “Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse - no matter your design experience. For instance, never designed a highway before? No problem. Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software. In its discussion in BER Case 98-3 , the BER reviewed several cases involving engineering competency, and concluded it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case. They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances. Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment. Thus, Engineer A’s approach to reviewing AI-generated engineering designs presents greater ethical concerns than the ultimate use of AI for report writing. While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A’s ethical and professional obligations. To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3 , Engineer A is not incompetent. The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight. 
Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process of its output, Engineer A risked violating this requirement. Furthermore, failure to detect misaligned dimensions and omitted safety features further indicates that Engineer A did not exercise sufficient diligence. The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with the Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public”. Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns. AI-generated technical work requires at least the same level of scrutiny as human-created work. Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a. NSPE defines “Responsible Charge” in NSPE Position Statement No. 10-1778 as “being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority. Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experienced-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate. The BER notes that in BER Case 98-3 , the BER stated that technology must not replace or be used as a substitute for engineering judgement. Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints. Recommendations from the program or intern should not be blindly accepted, they should be considered and challenged and the resulting outputs should be understood. Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted. These are steps that, in this case, Engineer A chose not to follow. While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency. BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others. AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product. Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust. 
Extract constraints following this JSON structure:
{
  "new_constraint_classes": [
    {
      "label": "Clear, specific constraint class name",
      "definition": "What this type of constraint limits or restricts",
      "constraint_type": "legal|physical|resource|temporal|procedural",
      "flexibility": "hard|soft|negotiable",
      "violation_impact": "What happens if this constraint is violated",
      "mitigation_possible": "Whether and how this constraint can be mitigated",
      "examples_from_case": ["Example 1 from the case", "Example 2"],
      "source_text": "EXACT text snippet from case where this constraint is identified (max 200 characters)",
      "confidence": 0.0-1.0,
      "reasoning": "Why this is a new class not in existing ontology"
    }
  ],
  "constraint_individuals": [
    {
      "identifier": "Unique name for this specific constraint instance",
      "constraint_class": "Name of the constraint class (new or existing)",
      "constrained_entity": "What or who is constrained (e.g., 'Engineer L', 'Project')",
      "constraint_statement": "The specific limitation (e.g., 'Cannot exceed budget of $X')",
      "source": "Origin of constraint (e.g., 'Client budget', 'Environmental law')",
      "enforcement_mechanism": "How this constraint is enforced",
      "temporal_scope": "When this constraint applies",
      "severity": "critical|major|minor",
      "case_context": "How this constraint manifests in the specific case",
      "source_text": "EXACT text snippet from case where this constraint is mentioned (max 200 characters)",
      "is_existing_class": true/false,
      "confidence": 0.0-1.0
    }
  ]
}
Focus on:
1. NEW constraint types that represent novel limitations or boundaries
2. Specific constraint instances showing how limitations apply in this case
3. The difference between constraints (boundaries) and obligations (duties)
4. Impact and severity of constraints on decision-making
Return ONLY the JSON structure, no additional text.
Saved: 2026-01-05 18:55
LLM Response
```json
{
  "new_constraint_classes": [
    {
      "label": "Responsible Charge Constraint",
      "definition": "Limitations requiring engineers to maintain active engagement and personal decision-making authority throughout the engineering process from conception to completion",
      "constraint_type": "procedural",
      "flexibility": "hard",
      "violation_impact": "Violation of licensure law and loss of professional accountability",
      "mitigation_possible": "Cannot be mitigated - requires personal involvement throughout process",
      "examples_from_case": ["Engineer A did not maintain responsible charge", "Reviewing drawings after preparation without involvement doesn't satisfy Responsible Charge"],
      "source_text": "Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a",
      "confidence": 0.9,
      "reasoning": "Distinct from Competence Constraint as it requires continuous active engagement rather than just capability; goes beyond Procedural Constraint by mandating personal decision-making authority"
    },
    {
      "label": "Public Domain Disclosure Constraint",
      "definition": "Boundaries preventing placement of client confidential information into public or open-source domains without explicit consent",
      "constraint_type": "legal",
      "flexibility": "hard",
      "violation_impact": "Breach of confidentiality and potential legal liability",
      "mitigation_possible": "Only through prior client consent",
      "examples_from_case": ["Uploading to AI open-source interface tantamount to placing in public domain", "Engineer A did not obtain permission from Client W"],
      "source_text": "When Engineer A uploaded Client W's information into the AI open-source interface, this was tantamount to placing the Client's private information in the public domain",
      "confidence": 0.85,
      "reasoning": "More specific than general Confidentiality Constraint - addresses modern challenge of open-source/public platforms rather than traditional disclosure"
    }
  ],
  "constraint_individuals": [
    {
      "identifier": "AI_Tool_Verification_Constraint",
      "constraint_class": "Technology Substitution Constraint",
      "constrained_entity": "Engineer A",
      "constraint_statement": "Cannot rely on AI-generated technical work without comprehensive verification process equivalent to human-created work review",
      "source": "NSPE Code section II.2.b and BER Case 98-3",
      "enforcement_mechanism": "Professional licensure requirements and ethical code enforcement",
      "temporal_scope": "Whenever AI tools are used for engineering work",
      "severity": "critical",
      "case_context": "Engineer A only conducted high-level review of AI-generated design documents, missing critical errors",
      "source_text": "AI-generated technical work requires at least the same level of scrutiny as human-created work",
      "is_existing_class": true,
      "confidence": 0.95
    },
    {
      "identifier": "Client_W_Confidentiality_Boundary",
      "constraint_class": "Public Domain Disclosure Constraint",
      "constrained_entity": "Engineer A",
      "constraint_statement": "Cannot upload Client W's confidential information to open-source AI interfaces without prior consent",
      "source": "Code section II.1.c",
      "enforcement_mechanism": "Professional ethics code and potential legal action",
      "temporal_scope": "Throughout project engagement with Client W",
      "severity": "critical",
      "case_context": "Engineer A uploaded client data to AI interface without permission",
      "source_text": "confidential information can only be shared with prior consent of the Client",
      "is_existing_class": false,
      "confidence": 0.9
    },
    {
      "identifier": "AI_Attribution_Requirement",
      "constraint_class": "Attribution Constraint",
      "constrained_entity": "Engineer A",
      "constraint_statement": "Must disclose when AI substantially contributes to work product and include appropriate citations",
      "source": "Code section III.9",
      "enforcement_mechanism": "Professional ethics review and client trust requirements",
      "temporal_scope": "When submitting AI-assisted work products",
      "severity": "major",
      "case_context": "Engineer A failed to disclose AI use or provide citations in report",
      "source_text": "Engineers shall give credit for engineering work to those to whom credit is due",
      "is_existing_class": true,
      "confidence": 0.85
    },
    {
      "identifier": "Safety_Feature_Inclusion_Boundary",
      "constraint_class": "Safety Constraint",
      "constrained_entity": "Engineer A",
      "constraint_statement": "Cannot omit key safety features necessary for regulatory compliance in design documents",
      "source": "Fundamental Canon I.1 and local regulations",
      "enforcement_mechanism": "Regulatory compliance checks and professional review",
      "temporal_scope": "During design document preparation and review",
      "severity": "critical",
      "case_context": "AI-generated plans omitted key safety features required for compliance",
      "source_text": "key safety features (including those necessary for compliance with local regulations) were omitted",
      "is_existing_class": true,
      "confidence": 0.95
    },
    {
      "identifier": "Engineering_Judgment_Preservation",
      "constraint_class": "Technology Substitution Constraint",
      "constrained_entity": "Engineer A",
      "constraint_statement": "Technology must never replace or substitute for engineering judgment",
      "source": "BER Case 98-3",
      "enforcement_mechanism": "Professional standards and licensure requirements",
      "temporal_scope": "Continuous throughout engineering practice",
      "severity": "critical",
      "case_context": "Engineer A relied on AI without adequate engineering judgment application",
      "source_text": "technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment",
      "is_existing_class": true,
      "confidence": 1.0
    },
    {
      "identifier": "Responsible_Charge_Maintenance",
      "constraint_class": "Responsible Charge Constraint",
      "constrained_entity": "Engineer A",
      "constraint_statement": "Must be actively engaged in engineering process from conception to completion with personal decision-making authority",
      "source": "NSPE Position Statement No. 10-1778",
      "enforcement_mechanism": "Licensure law and professional standards",
      "temporal_scope": "Throughout entire project lifecycle",
      "severity": "critical",
      "case_context": "Engineer A failed to maintain responsible charge when relying on AI without proper oversight",
      "source_text": "being actively engaged in the engineering process, from conception to completion",
      "is_existing_class": false,
      "confidence": 0.9
    }
  ]
}
```
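Once a constraints response passes that kind of structural check, its individuals presumably become ontology instances. The sketch below shows one way to do that with rdflib; the namespace IRI and the predicate names (constrainedEntity, severity, confidence) are hypothetical stand-ins because the real ProEthica vocabulary is not reproduced in this log.

```python
from rdflib import RDF, RDFS, XSD, Graph, Literal, Namespace

# Hypothetical namespace; the actual ProEthica ontology IRI is not shown in this log.
PROETHICA = Namespace("http://example.org/proethica#")


def constraints_to_graph(individuals: list[dict]) -> Graph:
    """Materialize constraint individuals (parsed from the response above) as RDF triples."""
    g = Graph()
    g.bind("pe", PROETHICA)
    for ind in individuals:
        subject = PROETHICA[ind["identifier"]]
        # Collapse the class label into a CamelCase local name; a real pipeline would
        # first mint newly proposed classes (is_existing_class == false) with their definitions.
        cls = PROETHICA[ind["constraint_class"].replace(" ", "")]
        g.add((subject, RDF.type, cls))
        g.add((subject, RDFS.label, Literal(ind["constraint_statement"])))
        g.add((subject, PROETHICA.constrainedEntity, Literal(ind["constrained_entity"])))
        g.add((subject, PROETHICA.severity, Literal(ind["severity"])))
        g.add((subject, PROETHICA.confidence, Literal(ind["confidence"], datatype=XSD.decimal)))
    return g
```

Fed the six constraint_individuals above, this yields a small graph in which, for example, Engineering_Judgment_Preservation is typed as a TechnologySubstitutionConstraint with severity "critical" and confidence 1.0.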
Capabilities Extraction
LLM Prompt
You are an expert in professional ethics analyzing a case for capabilities (competencies and skills required for professional practice). Based on the literature: - Capabilities are COMPETENCIES that enable norm compliance (Hallamaa & Kalliokoski 2022) - They represent the skills needed to fulfill professional obligations (Dennis et al. 2016) - Capabilities include technical, ethical, communicative, and analytical competencies - They constitute "norm competence" - the ability to act ethically (Kong et al. 2020) Your task is to: 1. Identify NEW CAPABILITY CLASSES not in the existing ontology 2. Extract SPECIFIC CAPABILITY INDIVIDUALS from the case EXISTING CAPABILITIES IN ONTOLOGY (check if your identified capabilities match these before creating new classes): - AI Tool Evaluation Capability: The competency to assess, validate, and appropriately use AI-generated content in professional work - Capability: A realizable entity that can be realized by specific types of actions or processes in professional contexts. This is the Ca component of the formal specification D=(R,P,O,S,Rs,A,E,Ca,Cs). - Capability Type: Meta-class for specific capability types recognized by the ProEthica system - Data Privacy Protection Capability: The competency to safeguard confidential client information when using digital tools and platforms - Justification Capability: Capability to trace reasoning paths connecting decisions to authoritative sources (McLaren 2003, Anderson & Anderson 2018) - Regulatory Compliance Capability: The competency to ensure designs and documents meet local regulations and safety standards - Responsible Charge Capability: The competency to maintain active engagement and supervisory control from conception to completion of engineering work - Technical Capability: A specialized technical skill or competency in a professional domain - Technical Writing Capability: The competency to produce clear, accurate, and professional technical documentation and reports - Technology Integration Capability: The competency to appropriately integrate new technologies while maintaining professional judgment Analyze this discussion section: The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies in engineering work, so long as the technologies are used in such a way that the engineering is done professionally. Artificial intelligence (AI) language processing software and AI-assisted drafting tools are in this category. The AI issues examined in this case are characteristic of engineering practice and are discussed and analyzed accordingly. However, other ethical considerations may arise when applying AI in different engineering contexts such as engineering education or engineering research. The following discussion does not attempt to address any of the potential legal considerations that may arise in such circumstances. Almost 35 years ago, in BER Case 90-6 , the BER looked at a hypothetical involving an engineer’s use of computer assisted drafting and design tools. The BER was asked if it was ethical for an engineer to sign and seal documents prepared using such a system. The introductory paragraph in that case gives a nice summary of the issue and looks ahead to one of the questions we see in this case – use of AI. The case begins: In recent years, the engineering profession has been ‘revolutionized’ by exponential growth in new and innovative computer technological breakthroughs. 
None have been more dynamic than the evolution that transformed yesterday's manual design techniques to Computer Aided Design (CAD), thence to Computer Assisted Drafting and Design (CADD) and soon Artificial Intelligence [AI]. The BER considers the change to CAD to merely represent a drafting enhancement. The change to CADD provides the BER with concerns that require assurance that the professional engineer has the requisite background, education and training to be proficient with the dynamics of CADD including the limitations of current technology. As night follows day one can be assured that CADD utilized beyond its ability to serve as a valuable tool has a propensity to be utilized as a crutch or substitute for judgement. That translates to a scenario for potential liability. In BER Case 90-6 , the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control. The use of AI in engineering practice raises ethical considerations, particularly concerning competency, direction and control, respect for client privacy, and accurate and appropriate attribution. Culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks. Because Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents . . . not prepared under their direction and control.” Further, the use of AI to assist with writing does not inherently constitute deception. Engineer A did not misrepresent their qualifications or technical expertise, nor did the AI-generated text contain inaccuracies. Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here. Finally, Engineer A performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. Per Code section II.1.c, confidential information can only be shared with prior consent of the Client. While careful review and checking of AI-generated content was consistent for ethical use, this does not end the inquiry into Engineer A’s actions. When Engineer A uploaded Client W’s information into the AI open-source interface, this was tantamount to placing the Client’s private information in the public domain. The facts here do not indicate Engineer A obtained permission from Client W to use the private information in the public domain. Similarly, the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority. Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations. 
Absent this professional level of care, diligence, and documentation, Engineer A’s use of the AI language processing software would be less than ethical. In addition to using AI to prepare the report, Engineer A also prepared draft design documents with a AI-assisted drafting tool that was new to the market. Engineer A elected to only conduct a high-level review and adjusted certain elements to align with site-specific conditions. When Client W reviewed the design documents, they found misaligned dimensions and key safety features (including those necessary for compliance with local regulations) were omitted. Turning to the omission of key safety features in the AI-generated plans, the BER looked at a similar situation previously. BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work. The solicitation read “Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse - no matter your design experience. For instance, never designed a highway before? No problem. Just point to the ‘Highways’ window and click.” The engineer in BER Case 98-3 ordered the CD-ROM and began offering facilities design and construction services despite having no experience in this area or with the software. In its discussion in BER Case 98-3 , the BER reviewed several cases involving engineering competency, and concluded it would be unethical for an engineer to offer facilities design and construction services using a tool like this CD-ROM based on the facts presented in the case. They noted: In closing, the [BER]’s decision should not be understood as a wholesale rejection of the use of computers, CD-ROMs and other technological advances. Rather, it is the [BER]’s position that technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment. Thus, Engineer A’s approach to reviewing AI-generated engineering designs presents greater ethical concerns than the ultimate use of AI for report writing. While AI-assisted drafting can be beneficial, the identified errors suggest insufficient review, which could compromise public welfare and impinge on Engineer A’s ethical and professional obligations. To begin, it is the BER’s view that under the facts, unlike the situation of BER Case 98-3 , Engineer A is not incompetent. The facts specifically note Engineer A has “several years of experience” and “strong technical expertise.” But the facts also note Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight. Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.” By relying on AI-assisted tools without a comprehensive verification process of its output, Engineer A risked violating this requirement. Furthermore, failure to detect misaligned dimensions and omitted safety features further indicates that Engineer A did not exercise sufficient diligence. The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with the Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public”. 
Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns. AI-generated technical work requires at least the same level of scrutiny as human-created work. Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a. NSPE defines “Responsible Charge” in NSPE Position Statement No. 10-1778 as “being actively engaged in the engineering process, from conception to completion. Engineering decisions must be personally made by the professional engineer or by others over which the professional engineer provides supervisory direction and control authority. Reviewing drawings or documents after preparation without involvement in the design and development process does not satisfy the definition of Responsible Charge.” Engineer A, as the engineer in Responsible Charge of the project, is required to provide an experienced-based quality assurance review, engaging in critical discussions, mentorship, and professional development—elements that AI cannot replicate. The BER notes that in BER Case 98-3 , the BER stated that technology must not replace or be used as a substitute for engineering judgement. Much like directing an engineering intern to solve a problem, responsible use of AI requires an engineer to outline solution guidelines and constraints. Recommendations from the program or intern should not be blindly accepted, they should be considered and challenged and the resulting outputs should be understood. Only after the engineer in Responsible Charge has satisfied themselves that the proposed solution is in accordance with their own and professional standards should the design/report be accepted. These are steps that, in this case, Engineer A chose not to follow. While Engineer A reviewed the content, the lack of disclosure raises concerns about transparency. BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others. AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI disclosure in engineering work, but best practices suggest informing clients when AI substantially contributes to a work product. Given that Client W identified issues in the engineering design and questioned inconsistencies in the report, proactive disclosure could have prevented misunderstandings and strengthened trust. 
Extract capabilities following this JSON structure:
{
  "new_capability_classes": [
    {
      "label": "Clear, specific capability class name",
      "definition": "What competency or skill this capability represents",
      "capability_type": "technical|ethical|communicative|analytical",
      "norm_competence_related": "Which professional obligations this capability enables",
      "skill_level": "basic|intermediate|advanced|expert",
      "acquisition_method": "How this capability is typically acquired (education, training, experience)",
      "examples_from_case": ["Example 1 from the case", "Example 2"],
      "source_text": "EXACT text snippet from case where this capability is identified (max 200 characters)",
      "confidence": 0.0-1.0,
      "reasoning": "Why this is a new class not in existing ontology"
    }
  ],
  "capability_individuals": [
    {
      "identifier": "Unique name for this specific capability instance",
      "capability_class": "Name of the capability class (new or existing)",
      "possessed_by": "Who has this capability (e.g., 'Engineer L', 'All Licensed PEs')",
      "capability_statement": "The specific competency (e.g., 'Design stormwater systems')",
      "demonstrated_through": "How shown in the case (e.g., 'Years of experience', 'Professional license')",
      "proficiency_level": "basic|intermediate|advanced|expert",
      "enables_obligations": "Which obligations this capability enables",
      "temporal_aspect": "When this capability is relevant",
      "case_context": "How this capability manifests in the specific case",
      "source_text": "EXACT text snippet from case where this capability is mentioned (max 200 characters)",
      "is_existing_class": true/false,
      "confidence": 0.0-1.0
    }
  ]
}
Focus on:
1. NEW capability types that represent novel competencies
2. Specific capability instances showing professional competencies in this case
3. The relationship between capabilities and norm competence
4. How capabilities enable fulfillment of professional obligations
Return ONLY the JSON structure, no additional text.
Saved: 2026-01-05 18:56
LLM Response
```json
{
  "new_capability_classes": [
    {
      "label": "AI Output Verification Capability",
      "definition": "The competency to comprehensively verify, validate, and correct AI-generated technical outputs through systematic review processes",
      "capability_type": "analytical",
      "norm_competence_related": "Enables maintaining direction and control over AI-generated work products per Code section II.2.b",
      "skill_level": "advanced",
      "acquisition_method": "Combination of technical education, AI tool training, and experience with verification methodologies",
      "examples_from_case": ["Engineer A performed thorough review and cross-checked key facts", "Engineer A only conducted high-level review of design documents"],
      "source_text": "Engineer A performed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text",
      "confidence": 0.9,
      "reasoning": "Distinct from general AI Tool Evaluation - specifically focuses on verification and validation of AI outputs rather than tool assessment"
    },
    {
      "label": "Attribution and Citation Capability",
      "definition": "The competency to properly attribute contributions and cite technical authorities in engineering work products",
      "capability_type": "ethical",
      "norm_competence_related": "Enables compliance with Code section III.9 requiring credit for engineering work and recognition of proprietary interests",
      "skill_level": "intermediate",
      "acquisition_method": "Professional ethics training, academic writing education, industry practice",
      "examples_from_case": ["AI-generated report lacked citations of pertinent documents", "Need to acknowledge AI contributions"],
      "source_text": "the facts do not indicate the AI-generated report included citations of pertinent documents of technical authority",
      "confidence": 0.85,
      "reasoning": "Not covered by existing capabilities - specific competency for proper attribution and citation in professional work"
    },
    {
      "label": "Engineering Judgment Preservation Capability",
      "definition": "The competency to maintain independent engineering judgment when using automated tools and not substitute technology for professional decision-making",
      "capability_type": "ethical",
      "norm_competence_related": "Prevents violation of professional standards by ensuring technology doesn't replace engineering judgment",
      "skill_level": "expert",
      "acquisition_method": "Years of engineering experience, professional development, mentorship",
      "examples_from_case": ["Technology must not replace engineering judgment", "AI cannot replicate critical discussions and mentorship"],
      "source_text": "technology has an important place in the practice of engineering, but it must never be a replacement of a substitute for engineering judgment",
      "confidence": 0.95,
      "reasoning": "Distinct from Technology Integration - focuses on preserving judgment rather than integration skills"
    }
  ],
  "capability_individuals": [
    {
      "identifier": "EngineerA_Environmental_Analysis",
      "capability_class": "Technical Capability",
      "possessed_by": "Engineer A",
      "capability_statement": "Analyze groundwater monitoring data and assess contaminant risks",
      "demonstrated_through": "Years of experience as environmental engineer",
      "proficiency_level": "expert",
      "enables_obligations": "Perform services only in areas of competence per Canon I.2",
      "temporal_aspect": "Throughout project duration",
      "case_context": "Engineer A was competent to analyze groundwater monitoring data",
      "source_text": "Engineer A, as an experienced environmental engineer, was competent to analyze groundwater monitoring data and assess contaminant risks",
      "is_existing_class": true,
      "confidence": 1.0
    },
    {
      "identifier": "EngineerA_AI_Verification",
      "capability_class": "AI Output Verification Capability",
      "possessed_by": "Engineer A",
      "capability_statement": "Verify and cross-check AI-generated report content against professional sources",
      "demonstrated_through": "Thorough review process for report writing",
      "proficiency_level": "intermediate",
      "enables_obligations": "Maintain direction and control per Code section II.2.b",
      "temporal_aspect": "During report preparation phase",
      "case_context": "Successfully verified report but failed with design documents",
      "source_text": "Engineer A performed a thorough review, cross-checked key facts against professional sources",
      "is_existing_class": false,
      "confidence": 0.9
    },
    {
      "identifier": "EngineerA_Responsible_Charge",
      "capability_class": "Responsible Charge Capability",
      "possessed_by": "Engineer A",
      "capability_statement": "Maintain active engagement and supervisory control from conception to completion",
      "demonstrated_through": "Required but inadequately demonstrated",
      "proficiency_level": "basic",
      "enables_obligations": "Compliance with licensure law and Code section III.8.a",
      "temporal_aspect": "Throughout entire project lifecycle",
      "case_context": "Failed to maintain responsible charge over AI-generated designs",
      "source_text": "Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a",
      "is_existing_class": true,
      "confidence": 0.95
    },
    {
      "identifier": "EngineerA_Data_Privacy",
      "capability_class": "Data Privacy Protection Capability",
      "possessed_by": "Engineer A",
      "capability_statement": "Protect client confidential information when using AI tools",
      "demonstrated_through": "Failed to obtain consent before uploading to AI",
      "proficiency_level": "basic",
      "enables_obligations": "Compliance with Code section II.1.c on confidential information",
      "temporal_aspect": "When using AI tools with client data",
      "case_context": "Uploaded client information to open-source AI without permission",
      "source_text": "When Engineer A uploaded Client W's information into the AI open-source interface, this was tantamount to placing the Client's private information",
      "is_existing_class": true,
      "confidence": 0.95
    },
    {
      "identifier": "EngineerA_Attribution",
      "capability_class": "Attribution and Citation Capability",
      "possessed_by": "Engineer A",
      "capability_statement": "Provide proper citations and acknowledge AI contributions",
      "demonstrated_through": "Absence of citations in AI-generated report",
      "proficiency_level": "basic",
      "enables_obligations": "Give credit per Code section III.9",
      "temporal_aspect": "When preparing technical documents",
      "case_context": "Failed to include citations or disclose AI use",
      "source_text": "engineers are required to 'give credit for engineering work to those to whom credit is due'",
      "is_existing_class": false,
      "confidence": 0.85
    },
    {
      "identifier": "BER_Technology_Advocacy",
      "capability_class": "Technology Integration Capability",
      "possessed_by": "Board of Ethical Review",
      "capability_statement": "Advocate for appropriate introduction of new technologies in engineering",
      "demonstrated_through": "35-year history of technology guidance",
      "proficiency_level": "expert",
      "enables_obligations": "Guide professional use of emerging technologies",
      "temporal_aspect": "Ongoing institutional capability",
      "case_context": "BER's historical perspective on CAD, CADD, and AI adoption",
      "source_text": "The Board of Ethical Review (BER) has a long history of open and welcome advocacy for the introduction of new technologies",
      "is_existing_class": true,
      "confidence": 1.0
    }
  ]
}
```
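Together, the three passes in this step supply the O (obligations), Cs (constraints), and Ca (capabilities) components of the formal specification D=(R,P,O,S,Rs,A,E,Ca,Cs) cited in the prompts. The container below is a rough sketch of how those outputs could be carried downstream, again assuming a Python pipeline; the class and method names are illustrative, not taken from ProEthica.

```python
from dataclasses import dataclass, field


@dataclass
class NormativePass:
    """Holds the Step 2 outputs: O (obligations), Cs (constraints), Ca (capabilities)."""
    obligations: dict = field(default_factory=dict)
    constraints: dict = field(default_factory=dict)
    capabilities: dict = field(default_factory=dict)

    def proposed_classes(self) -> dict[str, list[str]]:
        """New class labels each pass asks to add to the ontology."""
        return {
            "obligations": [c["label"] for c in self.obligations.get("new_obligation_classes", [])],
            "constraints": [c["label"] for c in self.constraints.get("new_constraint_classes", [])],
            "capabilities": [c["label"] for c in self.capabilities.get("new_capability_classes", [])],
        }

    def unmet_obligations(self) -> list[str]:
        """Identifiers of obligation individuals the model marked as unmet."""
        return [
            ind["identifier"]
            for ind in self.obligations.get("obligation_individuals", [])
            if ind.get("compliance_status") == "unmet"
        ]
```

Loaded with the three responses above, proposed_classes() surfaces the seven newly proposed classes (from AI Disclosure Obligation through Engineering Judgment Preservation Capability) for ontology review, and unmet_obligations() returns every obligation individual except EngineerA_Competence_AI_Tools.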