Step 4: Full View

Entities, provisions, decisions, and narrative

Use of Artificial Intelligence in Engineering Practice
Step 4 of 5

Entities: 344
Provisions: 9
Precedents: 2
Questions: 21
Conclusions: 28
Transformation: Stalemate (competing obligations remain in tension without clear resolution)
Full Entity Graph
Synthesis Reasoning Flow
Shows how NSPE provisions inform questions and conclusions: the board's reasoning chain

The board's deliberative chain: which code provisions informed which ethical questions, and how those questions were resolved. Toggle "Show Entities" to see which entities each provision applies to.

Nodes:
- Provision (e.g., I.1)
- Question: Board = board-explicit, Impl = implicit, Tens = principle tension, Theo = theoretical, CF = counterfactual
- Conclusion: Board = board-explicit, Resp = question response, Ext = analytical extension, Synth = principle synthesis
- Entity (hidden by default)

Edges:
- informs
- answered by
- applies to
NSPE Code Provisions Referenced
Section I. Fundamental Canons (3 provisions, 97 entities)

I.1. Hold paramount the safety, health, and welfare of the public.

Case Excerpts
discussion: "The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with the Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public”. Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns." 95% confidence
Applies To (33)
- [Role] Engineer A Environmental Engineering Consultant: Engineer A must hold paramount public safety when preparing environmental reports affecting public health.
- [Role] Engineer A AI-Assisted Engineering Practitioner: Using AI tools without adequate oversight risks public safety in environmental and infrastructure work.
- [Role] Engineer A Engineer in Responsible Charge: Failing to maintain active engagement in responsible charge directly threatens public safety and welfare.
- [Role] Engineer A Groundwater Infrastructure Design Engineer: Design errors in groundwater infrastructure from AI-assisted tools pose direct risks to public safety.
- [Principle] Public Welfare Paramount Invoked By Omission Of Safety Features In Design Documents: The omission of safety features required by local regulations directly threatened public safety, which I.1 requires engineers to hold paramount.
- [Principle] Public Welfare Paramount Invoked Regarding AI Design Document Errors: Misaligned dimensions and omitted safety features in AI-generated documents created public risk, directly implicating the paramount duty to protect public welfare.
- [Principle] Diligent Verification of AI-Generated Technical Outputs Violated in Design Phase: Failure to thoroughly verify AI-generated design outputs resulted in errors threatening public safety, which I.1 requires engineers to prioritize above all else.
- [Obligation] Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents: This provision directly mandates holding public safety paramount, which is the core duty implicated by omitting safety features in design documents.
- [Obligation] Regulatory Compliance Verification Obligation Breached By Engineer A In Design Document Submission: Failing to verify regulatory compliance, including safety requirements, directly implicates the obligation to hold public safety paramount.
- [Obligation] Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents: Failure to verify safety-related regulatory compliance in AI-generated design documents directly violates the duty to hold public safety paramount.
- [Obligation] Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance: Substituting AI output for independent engineering judgment risks public safety, directly implicating the paramount safety obligation.
- [State] Public Safety Risk from Design Omissions: Omitting required safety features directly threatens public safety, which engineers must hold paramount.
- [State] Engineer A Public Safety Risk from Design Errors: Misaligned dimensions and omitted safety features in submitted documents create direct public safety risks.
- [State] Engineer A Non-Compliant AI Design Documents: Submitting non-compliant design documents with errors endangers public safety and welfare.
- [State] AI-Generated Design Documents Non-Compliant State: Design documents with dimensional errors and safety omissions directly violate the duty to protect public safety.
- [State] Engineer A Insufficient Responsible Charge: Failure to maintain responsible charge over AI outputs risks public safety through undetected errors.
- [Resource] Local Regulatory Safety Requirements for Groundwater Infrastructure: The safety requirements define the mandatory protections for public health that Engineer A must uphold under this canon.
- [Resource] Open-Source AI Drafting Software: The AI software produced design documents missing required safety features, directly implicating the duty to hold public safety paramount.
- [Action] Conducted Cursory Design Document Review: Failing to thoroughly review AI-generated design documents risks public safety by allowing errors to pass into engineering outputs.
- [Event] AI Design Documents Generated: Defective AI-generated design documents directly threaten public safety if used without proper review.
- [Event] Design Document Defects Discovered: Discovered defects in design documents represent a direct risk to public safety and welfare.
- [Capability] Engineer A Responsible Charge Active Review Capability Instance: Responsible charge review directly protects public safety and welfare by ensuring engineering work is sound before delivery.
- [Capability] Engineer A AI Output Verification Capability Design Documents Instance: Deficient verification of AI-generated design documents risks public safety through undetected errors.
- [Capability] Engineer A Regulatory Compliance Verification Capability Instance: Failing to verify regulatory compliance in design documents directly threatens public safety and welfare.
- [Capability] Engineer A Regulatory Compliance Verification Capability Deficient: Omission of key safety-related regulatory requirements from design documents directly endangers public health and safety.
- [Capability] Engineer A AI Output Verification Capability Deficient Design Documents: Failure to detect misaligned dimensions and errors in design documents creates direct public safety risks.
- [Capability] Engineer A Responsible Charge Active Engagement Capability Deficient: Failure to actively engage in responsible charge oversight undermines the protection of public safety and welfare.
- [Capability] Engineer A Technology As Tool Boundary Judgment Capability Deficient: Allowing AI to substitute for independent professional judgment compromises the quality of work and endangers public welfare.
- [Capability] Engineer A Peer Review Continuity Planning Capability Deficient: Absence of peer review arrangements removes a critical quality assurance safeguard protecting public safety.
- [Capability] Engineer B Peer Review Continuity Planning Capability Instance: Mentor-level peer review capability, if exercised, would have maintained quality assurance protections for public safety.
- [Constraint] Safety Constraint Engineer A AI-Generated Design Document Omissions: I.1 directly creates the obligation to hold public safety paramount, prohibiting submission of design documents with omitted safety features.
- [Constraint] Safety Constraint Engineer A AI Design Omissions: I.1 is the foundational provision explicitly referenced in this constraint, requiring Engineer A to prioritize public safety above AI-generated outputs.
- [Constraint] Regulatory Constraint Engineer A Local Safety Requirements Design Documents: I.1 underpins the requirement to ensure design documents meet local safety regulations for groundwater infrastructure.

I.2. Perform services only in areas of their competence.

Case Excerpts
discussion: "Culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in…" 92% confidence
Applies To (32)
- [Role] Engineer A Environmental Engineering Consultant: Engineer A must only perform environmental consulting services within areas of demonstrated competence.
- [Role] Engineer A AI-Assisted Engineering Practitioner: Using AI tools without sufficient competence to verify outputs falls outside the bounds of competent practice.
- [Role] Engineer A Groundwater Infrastructure Design Engineer: Engineer A must be competent in groundwater infrastructure design before undertaking such assignments.
- [Principle] Professional Competence Satisfied for Report Writing But Questioned for AI Tool Verification: I.2 requires performing services only within areas of competence, and Engineer A's inability to verify AI tool outputs raises the question of whether that standard was met.
- [Principle] Competence Assurance Under Novel Tool Adoption Invoked By Engineer A: Using a new AI tool without prior experience or established competence directly violates the requirement to perform services only in areas of competence.
- [Principle] Professional Competence Invoked By Engineer A In AI Tool Selection: Selecting AI tools to compensate for absent mentorship without adequate competence in those tools conflicts with the duty to perform only within competent areas.
- [Principle] Competence Assurance Under Novel Tool Adoption Applied to AI Drafting Tool: Adopting an untested AI drafting tool without ensuring understanding of its limitations directly conflicts with the requirement to work only within areas of competence.
- [Obligation] Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool: This provision requires performing services only within areas of competence, directly relating to the obligation to be competent in the AI tool used.
- [Obligation] AI-Generated Work Product Competence Verification Obligation Breached By Engineer A In Design Document Review: This provision requires competence in the technical fields involved, which includes the ability to critically review AI-generated design documents.
- [Obligation] AI-Generated Work Product Competence Verification Obligation Partially Met By Engineer A In Report Review: The provision requires competence in the services performed, directly relating to the obligation to adequately verify AI-generated report content.
- [Obligation] AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase: This provision requires performing services only in areas of competence, which includes understanding AI tool capabilities and limitations.
- [Obligation] Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance: Performing engineering services competently requires applying independent professional judgment rather than substituting AI output for it.
- [State] Engineer A Unfamiliar AI Tool Deployment: Using a newly released AI tool with no prior experience reflects performing services outside areas of competence.
- [State] Engineer A AI Drafting Tool Unfamiliarity: Using an unfamiliar AI drafting tool without adequate knowledge constitutes practicing outside competence.
- [State] Engineer A Self-Assessed Technical Writing Limitation: Engineer A's recognized limitation in technical writing indicates a competence gap in that service area.
- [Resource] Open-Source AI Drafting Software: Using AI software without sufficient competence to oversee its outputs raises the question of whether Engineer A performed services within their area of competence.
- [Resource] BER-Case-98-3: This precedent establishes that technology must not replace engineering judgment, directly supporting the competence requirement of this canon.
- [Action] Chose AI for Report Drafting: Using AI tools without sufficient competence in their application violates the requirement to perform services only in areas of competence.
- [Action] Used AI for Design Document Generation: Generating design documents via AI without competence in evaluating its outputs violates the requirement to perform services only in areas of competence.
- [Event] AI Report Draft Generated: Using AI to generate reports in technical areas where the engineer lacks competence raises questions about performing services within one's expertise.
- [Event] AI Design Documents Generated: Generating design documents via AI without sufficient competence to oversee the output violates the requirement to work only within areas of competence.
- [Capability] Engineer A AI Tool Competence Assessment Capability Instance: Performing services using an unfamiliar AI tool without assessing personal readiness violates the requirement to work only within areas of competence.
- [Capability] Engineer A AI Tool Competence Assessment Capability Deficient Design Tool: Failing to assess competence with a novel AI drafting tool before relying on it for professional engineering documents directly violates this provision.
- [Capability] Engineer A Technical Writing and Report Authorship Capability Instance: Self-identified limited confidence in technical writing raises questions about competence in that area of service.
- [Capability] Engineer A Technology As Tool Boundary Judgment Capability Deficient: Allowing AI to substitute for independent engineering judgment reflects a failure to practice only within areas of genuine personal competence.
- [Constraint] Scope of Practice Constraint Engineer A AI Tool Reliance Beyond Competence: I.2 directly creates the constraint preventing Engineer A from relying on AI outputs in areas where independent verification competence is lacking.
- [Constraint] Scope of Practice Constraint Engineer A AI Tool Competence: I.2 requires services only within areas of competence, directly generating the constraint on use of newly marketed AI-assisted drafting tools.
- [Constraint] AI Tool Competence Boundary Constraint Engineer A Novel Drafting Tool: I.2 creates the boundary requiring independent verification before relying on outputs from an unfamiliar AI drafting tool.
- [Constraint] AI Tool Competence Boundary Constraint Engineer A Novel Drafting Software: I.2 limits reliance on AI drafting software outputs when Engineer A lacks prior experience with that specific tool.
- [Constraint] Competence Constraint Engineer A Technical Writing Self-Assessment: I.2 requires competence in all service areas, directly constraining Engineer A given self-assessed limitations in technical writing.
- [Constraint] Peer Review Absence Compensation Constraint Engineer A Post-Engineer B Retirement: I.2 requires competence maintenance, constraining Engineer A from continuing at the same scope without alternative peer review after Engineer B retired.
- [Constraint] Peer Review Absence Compensation Constraint Engineer A Post Engineer B Retirement: I.2 requires competence maintenance, constraining Engineer A to establish alternative peer review arrangements before undertaking AI-assisted work.

I.5. Avoid deceptive acts.

Case Excerpts
discussion: "Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here." 95% confidence
Applies To (32)
- [Role] Engineer A AI-Assisted Engineering Practitioner: Submitting AI-generated work without disclosure or adequate review could constitute a deceptive act toward the client.
- [Role] Engineer A Environmental Engineering Consultant: Presenting AI-drafted reports as fully engineer-reviewed deliverables without proper oversight may deceive the client.
- [Principle] Transparency Principle Invoked By Engineer A Toward Client W: Failing to disclose AI tool use to Client W constitutes a deceptive act by omission, which I.5 prohibits.
- [Principle] AI Tool Transparency Obligation Breached By Engineer A In Report Submission: Submitting an AI-drafted report without disclosure is a deceptive act directly prohibited by I.5.
- [Principle] AI Tool Transparency Obligation Breached By Engineer A In Design Document Submission: Submitting AI-assisted design documents without disclosure constitutes a deceptive act by omission contrary to I.5.
- [Principle] Intellectual Honesty In Authorship Invoked By Engineer A Report: Presenting an AI-generated draft as Engineer A's own professional work without disclosure is a deceptive act prohibited by I.5.
- [Principle] Intellectual Integrity in Authorship Applied to AI Report Drafting: Creating an implicit false impression of sole human authorship by not disclosing AI's material contribution constitutes a deceptive act under I.5.
- [Principle] AI Tool Transparency and Disclosure Applied to Client W Relationship: Failing to proactively disclose AI's substantial contribution to deliverables is a deceptive omission prohibited by I.5.
- [Obligation] AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W: Failing to disclose AI use in generating the report constitutes a deceptive act by misrepresenting the nature of the work product.
- [Obligation] AI Tool Disclosure Obligation Breached By Engineer A In Design Document Submission To Client W: Failing to disclose AI use in generating design documents constitutes a deceptive act toward the client.
- [Obligation] Intellectual Authorship Integrity Obligation Breached By Engineer A In Report Submission: Misrepresenting the authorship and provenance of the report by not disclosing AI generation is a deceptive act.
- [Obligation] Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission: Presenting an AI-generated report without disclosing its provenance constitutes a deceptive act prohibited by this provision.
- [Obligation] Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report: Failing to proactively disclose AI contribution to the report is a deceptive act under this provision.
- [Obligation] Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents: Failing to proactively disclose AI contribution to design documents is a deceptive act under this provision.
- [State] Engineer A Undisclosed AI Report Use: Submitting an AI-generated report without disclosure to the client constitutes a deceptive act.
- [State] Engineer A Undisclosed AI Design Document Use: Submitting AI-generated design documents without informing the client is a deceptive act.
- [State] Engineer A Undisclosed AI Report Contribution: Failing to disclose AI involvement in drafting the report deceives the client about the work's origin.
- [State] Engineer A Undisclosed AI Attribution Gap: Omitting citations and attribution for AI-generated content misleads the client about the report's basis.
- [Resource] AI Software Usage Disclosure Norms: Failing to disclose AI tool use constitutes a deceptive act that this canon prohibits, and the absent disclosure norms highlight the violation.
- [Resource] Open-Source AI Drafting Software: Not disclosing that the AI software was used to generate deliverables is the deceptive act this provision forbids.
- [Action] Submitted Report Without AI Disclosure: Submitting an AI-drafted report without disclosing AI involvement is a deceptive act toward the client or public.
- [Event] AI Report Draft Generated: Submitting an AI-generated report without disclosure could constitute a deceptive act toward the client.
- [Event] AI Design Documents Generated: Presenting AI-generated design documents as the engineer's own work without disclosure is a deceptive act.
- [Event] Report Stylistic Inconsistency Detected: Stylistic inconsistencies suggest the report origin was concealed, pointing to a potentially deceptive act.
- [Capability] Engineer A AI Disclosure and Transparency Capability Instance: Failing to disclose AI use in generating deliverables constitutes a deceptive act toward the client.
- [Capability] Engineer A AI Disclosure Transparency Capability Deficient: Proactively concealing AI contributions to both deliverables is a direct deceptive act prohibited by this provision.
- [Capability] Engineer A AI Attribution Citation Capability Deficient: Omitting citations to authoritative documents and AI contributions in the report creates a false impression of independent authorship.
- [Constraint] Non-Deception Constraint Engineer A Report Authorship Representation: I.5 directly creates the non-deception constraint preventing Engineer A from submitting AI-generated reports in a manner that falsely implies independent authorship.
- [Constraint] AI-Generated Work Product Disclosure Constraint Engineer A Report Submission: I.5 requires avoiding deceptive acts, directly constraining Engineer A from submitting AI-generated reports without disclosure of AI use.
- [Constraint] AI-Generated Work Product Disclosure Constraint Engineer A Design Document Submission: I.5 prohibits deceptive acts, constraining Engineer A from submitting AI-assisted design documents without disclosing the AI tool's involvement.
- [Constraint] Proactive Client Trust Transparency Constraint Engineer A Report: I.5 supports the transparency constraint requiring proactive disclosure of AI software use to avoid creating a deceptive impression.
- [Constraint] Proactive Client Trust Transparency Constraint Engineer A Design Documents: I.5 supports the transparency constraint requiring proactive disclosure of AI drafting tool use to Client W.
Section II. Rules of Practice (3 provisions, 83 entities)

II.1.c. Engineers shall not reveal facts, data, or information without the prior consent of the client or employer except as authorized or required by law or this Code.

Case Excerpts
discussion: "…performed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. Per Code section II.1.c, confidential information can only be shared with prior consent of the Client." 92% confidence
Applies To (11)
- [Role] Engineer A Environmental Engineering Consultant: Engineer A must not disclose confidential client data from the environmental report without prior consent.
- [Role] Engineer A AI-Assisted Engineering Practitioner: Inputting confidential client data into AI software may constitute unauthorized disclosure of client information.
- [Principle] Client Data Confidentiality in AI Tool Use Violated by Engineer A: Uploading Client W's private project information into an open-source AI interface without prior consent directly violates the prohibition on revealing client data without consent under II.1.c.
- [Obligation] Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A: This provision prohibits revealing client facts or data without prior consent, directly governing the obligation not to upload client information into an open-source AI tool without consent.
- [State] Engineer A Client W Data Public Domain Exposure: Uploading Client W's confidential information to an open-source AI interface discloses client data without consent.
- [Resource] Open-Source AI Drafting Software: Inputting client data into the AI software raises concerns about unauthorized disclosure of confidential client information.
- [Action] Input Confidential Data into Public AI: Entering confidential client data into a public AI system reveals protected information without client consent.
- [Event] Confidential Data Exposed to AI: Inputting client confidential data into an AI system discloses that information without client consent.
- [Capability] Engineer A Client Data Confidentiality Management Capability Deficient: Uploading client private project information into an open-source AI interface discloses confidential client data without consent, violating this provision.
- [Constraint] Confidential Client Data Input Constraint Engineer A Open-Source AI Upload: II.1.c prohibits revealing client information without consent, directly creating the constraint against uploading Client W's confidential data into open-source AI.
- [Constraint] Confidential Client Data Input Constraint Engineer A Open Source AI: II.1.c prohibits disclosure of client information without prior consent, directly constraining Engineer A from uploading private project information into open-source AI interfaces.

II.2.a. Engineers shall undertake assignments only when qualified by education or experience in the specific technical fields involved.

Case Excerpts
discussion: "…the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer…" 92% confidence
Applies To (35)
Role
Engineer A Environmental Engineering Consultant Engineer A must only undertake the environmental reporting assignment if qualified in the specific technical fields involved.
Role
Engineer A Groundwater Infrastructure Design Engineer Engineer A must be qualified by education or experience before undertaking groundwater infrastructure design assignments.
Role
Engineer A AI-Assisted Engineering Practitioner Engineer A must be qualified to evaluate and verify AI-generated outputs in the technical fields involved.
Principle
Professional Competence Satisfied for Report Writing But Questioned for AI Tool Verification II.2.a requires qualification in the specific technical fields involved, and Engineer A's lack of competence in verifying AI tool outputs violates this standard.
Principle
Competence Assurance Under Novel Tool Adoption Invoked By Engineer A Undertaking AI-assisted engineering work without prior experience or qualification in the AI tool violates II.2.a's requirement to be qualified for the specific technical work undertaken.
Principle
Professional Competence Invoked By Engineer A In AI Tool Selection Using AI tools without adequate competence in them to perform engineering assignments conflicts with II.2.a's requirement to be qualified by education or experience.
Principle
Competence Assurance Under Novel Tool Adoption Applied to AI Drafting Tool Adopting a new AI drafting tool without ensuring sufficient understanding of its limitations violates the requirement under II.2.a to be qualified for the specific technical work.
Principle
Mentorship Continuity Obligation Invoked By Engineer A Following Engineer B Retirement Proceeding without securing alternative oversight after losing mentorship reflects a failure to ensure qualification for the assignments undertaken, as required by II.2.a.
Obligation
Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool This provision requires undertaking assignments only when qualified in the specific technical fields involved, including competence with tools used.
Obligation
AI-Generated Work Product Competence Verification Obligation Breached By Engineer A In Design Document Review This provision requires qualification in the specific technical fields involved, which includes the ability to critically evaluate AI-generated design outputs.
Obligation
AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase This provision requires qualification in the specific technical fields involved, including understanding AI tool capabilities and limitations before relying on them.
Obligation
AI-Generated Work Product Competence Verification Obligation Partially Met By Engineer A In Report Review This provision requires qualification in the specific technical fields involved, relating to the obligation to adequately verify AI-generated report content.
Obligation
Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report This provision requires qualification in the specific technical fields involved, which underpins the obligation to ensure the report is founded on established facts and professional analysis.
State
Engineer A Unfamiliar AI Tool Deployment Undertaking work using an AI tool without prior experience violates the requirement to be qualified in the technical fields involved.
State
Engineer A AI Drafting Tool Unfamiliarity Using a newly marketed AI drafting tool without adequate experience means the assignment was undertaken without proper qualification.
State
Engineer A Self-Assessed Technical Writing Limitation Proceeding with technical writing despite recognized personal limitations violates the duty to only undertake assignments when qualified.
State
Engineer A Mentor Support Absent Continuing practice without mentorship support in areas of weakness raises questions about qualification for those assignments.
Resource
Open-Source AI Drafting Software Engineer A must be qualified in the technical fields involved before undertaking assignments that rely on AI-generated outputs.
Resource
BER-Case-98-3 This precedent directly addresses the requirement that engineers be competent in the technical fields when using technology tools.
Resource
Professional Journal Articles on Emerging Contaminants Engineer A used these articles to verify AI outputs, reflecting the need for qualified technical knowledge to assess the subject matter.
Action
Chose AI for Report Drafting Undertaking report drafting using AI tools requires qualification in their use, which the engineer may lack.
Action
Used AI for Design Document Generation Undertaking AI-based design document generation requires demonstrated competence in that technical approach.
Event
AI Report Draft Generated Undertaking the report assignment using AI suggests the engineer may lack the qualified expertise to produce the work independently.
Event
AI Design Documents Generated Relying on AI to generate design documents indicates the engineer may not be qualified in the specific technical field involved.
Capability
Engineer A AI Tool Competence Assessment Capability Instance This provision requires qualification by education or experience before undertaking assignments, directly implicating the failure to assess AI tool competence.
Capability
Engineer A AI Tool Competence Assessment Capability Deficient Design Tool Undertaking engineering design work using an unfamiliar AI tool without verified qualification violates the requirement to be qualified in the specific technical field involved.
Capability
Engineer A Domain Expertise Environmental Engineering Instance Strong domain expertise in environmental engineering supports qualification to undertake the environmental assignment under this provision.
Capability
Engineer A Technical Writing and Report Authorship Capability Instance Limited independent technical writing capability raises questions about qualification to undertake report authorship assignments.
Constraint
Scope of Practice Constraint Engineer A AI Tool Reliance Beyond Competence II.2.a requires qualification in specific technical fields, directly constraining reliance on AI outputs where Engineer A cannot independently verify results.
Constraint
Scope of Practice Constraint Engineer A AI Tool Competence II.2.a requires undertaking assignments only when qualified, constraining Engineer A to demonstrate competence before using newly marketed AI drafting tools.
Constraint
AI Tool Competence Boundary Constraint Engineer A Novel Drafting Tool II.2.a requires qualification in specific technical fields, constraining Engineer A from relying on novel AI tool outputs without independent verification.
Constraint
AI Tool Competence Boundary Constraint Engineer A Novel Drafting Software II.2.a requires qualification before undertaking assignments, constraining reliance on open-source AI drafting software without prior experience.
Constraint
Competence Constraint Engineer A Technical Writing Self-Assessment II.2.a requires qualification by education or experience, constraining Engineer A given self-assessed limitations in technical writing competence.
Constraint
Peer Review Absence Compensation Constraint Engineer A Post-Engineer B Retirement II.2.a requires qualification for assignments undertaken, constraining Engineer A to establish alternative review arrangements to maintain the required competence level.
Constraint
Peer Review Absence Compensation Constraint Engineer A Post Engineer B Retirement II.2.a requires qualification for specific technical assignments, constraining Engineer A to arrange alternative peer review before continuing AI-assisted work.

Engineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.

Case Excerpts
discussion: "…[pe]rformed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents…”" 95% confidence
discussion: "…Engineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight. Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their di[rection and control]”" 95% confidence
Applies To (37)
Role
Engineer A Engineer in Responsible Charge Engineer A must not sign or seal documents not prepared under their active direction and control, which was compromised by over-reliance on AI.
Role
Engineer A Groundwater Infrastructure Design Engineer Engineer A must not affix their signature to AI-assisted design documents containing errors they failed to detect.
Role
Engineer A Environmental Engineering Consultant Engineer A must not sign the environmental report if it was not prepared under their direct supervision and control.
Principle
Professional Accountability Invoked By Engineer A Sealing AI-Generated Documents II.2.b prohibits affixing a seal to documents not prepared under the engineer's direction and control, which is implicated when Engineer A sealed AI-generated documents.
Principle
Responsible Charge Engagement Invoked By Engineer A Over Design Documents Sealing AI-generated design documents after only a cursory review violates II.2.b's requirement that sealed documents be prepared under the engineer's direction and control.
Principle
Responsible Charge Engagement Invoked By Engineer A Over Environmental Report Applying a professional seal to an AI-drafted report without substantive direction and control over its preparation conflicts with II.2.b.
Principle
Responsible Charge Engagement Violated Through AI Over-Reliance Failing to maintain active substantive engagement in AI-generated document development while still sealing those documents violates II.2.b's direction and control requirement.
Principle
Diligent Verification of AI-Generated Technical Outputs Violated in Design Phase Sealing design documents after only a high-level review rather than comprehensive verification violates II.2.b's requirement that sealed documents be prepared under the engineer's direction and control.
Obligation
Responsible Charge Active Review Obligation Breached By Engineer A Over Design Documents This provision prohibits signing documents not prepared under the engineer's direction and control, directly relating to the obligation to actively review AI-generated design documents before signing.
Obligation
Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report This provision prohibits signing documents not prepared under the engineer's direction and control, relating to the obligation to actively review the AI-drafted report.
Obligation
Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents This provision directly prohibits affixing signatures to documents not prepared under the engineer's direction and control, which is violated when AI-generated documents are signed without substantive review.
Obligation
AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents This provision requires that signed documents be prepared under the engineer's direction and control, necessitating comprehensive verification of AI-generated content before signing.
Obligation
Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance This provision requires documents to be prepared under the engineer's direction and control, which is violated when AI output substitutes for independent engineering judgment.
Obligation
Intellectual Authorship Integrity Obligation Breached By Engineer A In Report Submission This provision prohibits signing documents not prepared under the engineer's direction and control, directly relating to misrepresenting authorship of an AI-generated report.
Obligation
Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission This provision prohibits affixing signatures to documents not prepared under the engineer's direction and control, which is implicated when AI authorship is not disclosed.
State
Engineer A Non-Compliant AI Design Documents Affixing a seal to AI-generated design documents containing errors that were not prepared under adequate direction and control violates this provision.
State
Engineer A Insufficient Responsible Charge Conducting only a high-level review without true direction and control means documents were not prepared under the engineer's responsible charge.
State
AI-Generated Design Documents Non-Compliant State Signing off on AI-generated documents with known errors indicates lack of competence and control over the subject matter.
State
Engineer A Undisclosed AI Design Document Use Submitting AI-generated design documents without proper oversight means they were not prepared under the engineer's direction and control.
Resource
State Professional Engineering Seal Law The seal law governs the conditions under which Engineer A may affix their seal, directly intersecting with the requirement that sealed documents be under the engineer's direction and control.
Resource
Open-Source AI Drafting Software Documents generated by the AI software were not fully prepared under Engineer A's direction and control, implicating the prohibition on sealing such documents.
Resource
BER-Case-90-6 This precedent established that signing documents created with technology tools is ethical when the engineer exercises proper direction and control, directly relevant to this provision.
Resource
BER-Case-98-3 This precedent reinforces that the engineer must maintain direction and control over technology-assisted work before affixing their seal.
Action
Conducted Cursory Design Document Review Signing off on AI-generated design documents after only a cursory review means affixing a signature to documents not adequately prepared under the engineer's direction and control.
Action
Submitted Report Without AI Disclosure Submitting an AI-drafted report as one's own work implies the document was prepared under the engineer's direction and control when it was not.
Event
AI Report Draft Generated Signing off on an AI-generated report not prepared under the engineer's direct control violates this provision.
Event
AI Design Documents Generated Affixing a signature to AI-generated design documents not prepared under the engineer's direction and control directly violates this provision.
Event
Design Document Defects Discovered Defects in signed documents confirm the engineer did not exercise adequate direction and control over their preparation.
Capability
Engineer A Responsible Charge Active Review Capability Instance Signing documents requires they be prepared under the engineer's direction and control, which responsible charge review directly governs.
Capability
Engineer A AI Output Verification Capability Design Documents Instance Affixing a signature to AI-generated design documents after only cursory review means the documents were not adequately under the engineer's direction and control.
Capability
Engineer A Responsible Charge Active Engagement Capability Deficient Failure to actively engage in responsible charge means signed documents were not truly prepared under the engineer's direction and control.
Capability
Engineer A AI Output Verification Capability Deficient Design Documents Signing design documents without substantive verification means the engineer lacked competence to certify the subject matter.
Capability
Engineer A Technology As Tool Boundary Judgment Capability Deficient Allowing AI to effectively author documents rather than serve as a tool means signed documents were not prepared under the engineer's direction and control.
Constraint
Responsible Charge Verification Constraint Engineer A Design Documents II.2.b prohibits sealing documents not prepared under direction and control, directly creating the constraint requiring substantive review before sealing AI-generated design documents.
Constraint
AI Direction Control Constraint Engineer A Report II.2.b requires direction and control over documents bearing Engineer A's signature, constraining active oversight of AI-generated report content.
Constraint
AI Direction Control Constraint Engineer A Design Documents II.2.b requires that signed documents be prepared under the engineer's direction and control, constraining Engineer A to conduct comprehensive verification of AI-assisted design documents.
Constraint
Technology Non-Substitution Constraint Engineer A Design Phase II.2.b requires documents to be prepared under the engineer's direction and control, constraining Engineer A from substituting AI judgment for independent engineering judgment.
Section III. Professional Obligations 3 41 entities

Engineers shall avoid all conduct or practice that deceives the public.

Applies To (19)
Role
Engineer A AI-Assisted Engineering Practitioner Presenting AI-generated documents as fully engineer-reviewed work without adequate verification deceives the public.
Role
Engineer A Environmental Engineering Consultant Submitting flawed AI-drafted reports as professionally vetted work constitutes conduct that deceives the public.
Principle
AI Tool Transparency Obligation Breached By Engineer A In Report Submission Submitting an AI-drafted report to a client without disclosure deceives the public and client about the nature of the work product, violating III.3.
Principle
AI Tool Transparency Obligation Breached By Engineer A In Design Document Submission Submitting AI-assisted design documents without disclosure constitutes conduct that deceives the public about the authorship and verification of engineering deliverables.
Principle
Intellectual Honesty In Authorship Invoked By Engineer A Report Presenting an AI-generated draft as Engineer A's own professional work deceives the public about the true nature of the engineering work product, violating III.3.
Principle
Intellectual Integrity in Authorship Applied to AI Report Drafting Creating a false impression of sole human authorship through non-disclosure of AI's material contribution constitutes deceptive conduct toward the public under III.3.
Principle
AI Tool Transparency and Disclosure Applied to Client W Relationship Failing to disclose AI's substantial contribution to engineering deliverables constitutes deceptive practice toward the client and public, prohibited by III.3.
State
Engineer A Undisclosed AI Report Use Presenting an AI-generated report as the engineer's own work without disclosure deceives the public and client.
State
Engineer A Undisclosed AI Design Document Use Submitting AI-generated design documents without disclosure is a deceptive practice toward the public.
State
Engineer A Undisclosed AI Attribution Gap Omitting attribution for AI-generated content in a professional report constitutes deceptive conduct toward the public.
State
Engineer A Undisclosed AI Report Contribution Concealing AI involvement in drafting a professional report deceives the public about the nature of the engineering work.
Resource
AI Software Usage Disclosure Norms The absent disclosure norms represent the standard whose violation results in deceiving the public about the nature and origin of the engineering work.
Resource
Open-Source AI Drafting Software Presenting AI-generated documents as fully engineer-authored without disclosure constitutes conduct that deceives the public.
Action
Submitted Report Without AI Disclosure Presenting an AI-generated report without disclosure deceives the public about the nature and authorship of the engineering work.
Event
AI Report Draft Generated Presenting an AI-generated report to the public or client without disclosure constitutes deceptive conduct toward the public.
Event
Report Stylistic Inconsistency Detected The inconsistency suggests the true origin of the report was hidden, which is conduct that deceives the public.
Capability
Engineer A AI Disclosure and Transparency Capability Instance Non-disclosure of AI use in professional deliverables deceives the public and client about the nature and authorship of the work.
Capability
Engineer A AI Disclosure Transparency Capability Deficient Failing to disclose AI contributions to deliverables presented as professional engineering work constitutes deception of the public.
Capability
Engineer A AI Attribution Citation Capability Deficient Omitting citations and AI attribution creates a false public impression of independent professional authorship.

Engineers shall conform with state registration laws in the practice of engineering.

Case Excerpts
discussion: "Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a." 95% confidence
Applies To (8)
Role
Engineer A Engineer in Responsible Charge Engineer A bears statutory responsible charge obligations and must conform with state registration laws governing engineering practice.
Role
Engineer A Groundwater Infrastructure Design Engineer Engineer A must comply with state registration laws when sealing and submitting engineering design documents.
State
Engineer A Regulatory Compliance Obligation Engineer A has a direct obligation to conform with state registration laws and local regulations when sealing engineering documents.
State
Engineer A Non-Compliant AI Design Documents Submitting design documents that omit features required by local regulations violates the duty to conform with applicable laws.
State
Public Safety Risk from Design Omissions Omitting safety features required by local regulations represents a failure to conform with applicable regulatory requirements.
Resource
State Professional Engineering Seal Law This provision requires conformance with state registration laws, and the seal law directly defines the legal conditions Engineer A must follow when sealing AI-assisted documents.
Capability
Engineer A Regulatory Compliance Verification Capability Instance Verifying that design documents comply with applicable local regulations is directly required by the obligation to conform with state registration and practice laws.
Capability
Engineer A Regulatory Compliance Verification Capability Deficient Failing to verify regulatory compliance resulting in omission of required safety elements constitutes a failure to conform with applicable registration and practice laws.

Engineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.

Case Excerpts
discussion: "Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations." 95% confidence
discussion: "AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI…" 95% confidence
Applies To (14)
Role
Engineer A AI-Assisted Engineering Practitioner Engineer A must give appropriate credit and recognize proprietary interests when using AI tools that generate engineering content.
Role
Engineer A Environmental Engineering Consultant Engineer A must acknowledge the role of AI-generated content in the report and respect any associated proprietary interests.
State
Engineer A Undisclosed AI Attribution Gap Failing to cite technical authority and attribute AI-generated content denies proper credit to the sources of the engineering work.
State
Engineer A Undisclosed AI Report Contribution Not disclosing AI involvement in drafting the report fails to give credit to the AI tool's contribution to the work.
State
Engineer A Undisclosed AI Design Document Use Presenting AI-generated design documents without attribution fails to recognize the proprietary interests and contributions of the AI tool.
Resource
Open-Source AI Drafting Software Using AI-generated content without acknowledgment raises questions about giving proper credit and recognizing the proprietary interests associated with the AI tool's outputs.
Resource
AI Software Usage Disclosure Norms The absent disclosure norms are the standard that would require Engineer A to credit the AI tool and recognize any proprietary interests in its generated content.
Action
Submitted Report Without AI Disclosure Failing to disclose AI involvement denies proper credit and recognition to the AI tool and misrepresents the origin of the work.
Action
Used AI for Design Document Generation Using AI to generate design documents without acknowledgment fails to recognize the proprietary interests and contributions of the AI system or its developers.
Event
AI Report Draft Generated Failing to credit the AI tool or acknowledge its role in generating the report raises issues of proper attribution.
Event
AI Design Documents Generated Not acknowledging the AI system as the source of the design documents fails to recognize the proprietary and creative origins of the work.
Capability
Engineer A AI Attribution Citation Capability Deficient Failing to cite authoritative documents and disclose AI contributions denies credit to relevant sources and ignores proprietary interests of others.
Capability
Engineer A AI Disclosure and Transparency Capability Instance Non-disclosure of AI tool use fails to give appropriate credit to the AI system's contribution to the engineering work product.
Capability
Engineer A AI Disclosure Transparency Capability Deficient Presenting AI-generated content without attribution fails to recognize the proprietary interests and contributions of the AI tool provider and source materials.
Cross-Case Connections
Explicit Board-Cited Precedents (2)

Cases explicitly cited by the Board in this opinion. These represent direct expert judgment about intertextual relevance.

Principle Established (BER Case 98-3):

It is unethical for an engineer to offer services using new technology in areas where they lack competence and experience; technology has an important place in engineering practice but must never be a replacement or substitute for engineering judgment.

Citation Context:

The Board cited this case to establish that technology must never replace or substitute for engineering judgment, and to draw a parallel to Engineer A's insufficient review of AI-generated design documents, while distinguishing Engineer A's situation: unlike the engineer in that case, Engineer A is not incompetent.

Relevant Excerpts
discussion: "BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work. The solicitation read "Now - - thanks to a revolutionary new CD-ROM - specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse""
discussion: "it is the BER's view that under the facts, unlike the situation of BER Case 98-3 , Engineer A is not incompetent. The facts specifically note Engineer A has "several years of experience" and "strong technical expertise.""
discussion: "The BER notes that in BER Case 98-3 , the BER stated that technology must not replace or be used as a substitute for engineering judgement."
discussion: "BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others."

Principle Established (BER Case 90-6):

It is ethical for an engineer to sign and seal documents created using a CADD system, whether prepared by the engineer themselves or by others working under their direction and control, provided the engineer has the requisite background, education, and training to be proficient with the technology and its limitations.

Citation Context:

The Board cited this case to establish historical precedent for the ethical use of computer-assisted drafting and design tools, and to show the BER's longstanding openness to new technologies in engineering practice, including early anticipation of AI.

Relevant Excerpts
discussion: "Almost 35 years ago, in BER Case 90-6 , the BER looked at a hypothetical involving an engineer's use of computer assisted drafting and design tools."
discussion: "In BER Case 90-6 , the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control."
Implicit Similar Cases (10)

Cases sharing ontology classes or structural similarity. These connections arise from constrained extraction against a shared vocabulary.

Similar Case 1: Component Similarity 51%, Facts Similarity 45%, Discussion Similarity 42%, Provision Overlap 38%, Outcome Alignment 50%, Tag Overlap 46%
  Shared provisions: II.1.b, II.2.a, II.2.b, III.1.a, III.3.a
Similar Case 2: Component Similarity 51%, Facts Similarity 42%, Discussion Similarity 53%, Provision Overlap 36%, Outcome Alignment 50%, Tag Overlap 55%
  Shared provisions: I.2, II.1.b, II.2.a, II.2.b, III.1.a
Similar Case 3: Component Similarity 47%, Facts Similarity 40%, Discussion Similarity 48%, Provision Overlap 15%, Outcome Alignment 100%, Tag Overlap 46%
  Shared provisions: II.1.b, III.1.a (same outcome)
Similar Case 4: Component Similarity 51%, Facts Similarity 45%, Discussion Similarity 51%, Provision Overlap 33%, Outcome Alignment 50%, Tag Overlap 50%
  Shared provisions: II.2, II.2.a, II.2.b
Similar Case 5: Component Similarity 52%, Facts Similarity 46%, Discussion Similarity 52%, Provision Overlap 31%, Outcome Alignment 50%, Tag Overlap 50%
  Shared provisions: I.2, II.2, II.2.b, III.1.a
Similar Case 6: Component Similarity 49%, Facts Similarity 36%, Discussion Similarity 52%, Provision Overlap 36%, Outcome Alignment 50%, Tag Overlap 44%
  Shared provisions: I.2, II.2, II.2.a, II.2.b
Similar Case 7: Component Similarity 54%, Facts Similarity 46%, Discussion Similarity 52%, Provision Overlap 20%, Outcome Alignment 50%, Tag Overlap 62%
  Shared provisions: II.2.a, II.2.b
Similar Case 8: Component Similarity 56%, Facts Similarity 37%, Discussion Similarity 68%, Provision Overlap 18%, Outcome Alignment 50%, Tag Overlap 56%
  Shared provisions: II.2.b, III.1.a
Similar Case 9: Component Similarity 54%, Facts Similarity 35%, Discussion Similarity 40%, Provision Overlap 25%, Outcome Alignment 50%, Tag Overlap 44%
  Shared provisions: I.2, III.1.a, III.5
Similar Case 10: Component Similarity 45%, Facts Similarity 46%, Discussion Similarity 42%, Provision Overlap 31%, Outcome Alignment 50%, Tag Overlap 56%
  Shared provisions: I.2, II.2, II.2.a, II.2.b
Questions & Conclusions
Each question is shown with its corresponding conclusion(s). Board questions are expanded by default.
Decisions & Arguments
Causal-Normative Links 6
Fulfills
  • Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report
  • AI-Generated Work Product Competence Verification Obligation Partially Met By Engineer A In Report Review
  • Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report
Violates
  • AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report
  • Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission
Fulfills None
Violates
  • AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents
  • AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase
  • Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents
  • Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents
  • Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
  • Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
  • AI Tool Disclosure Obligation Breached By Engineer A In Design Document Submission To Client W
  • Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents
  • Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement
Fulfills None
Violates
  • Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A
  • Client Consent for Third-Party Data Sharing Obligation
Fulfills None
Violates
  • AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W
  • AI Tool Disclosure Obligation
  • Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report
  • Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission
  • AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report
Fulfills None
Violates
  • Responsible Charge Active Review Obligation Breached By Engineer A Over Design Documents
  • AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents
  • AI-Generated Work Product Competence Verification Obligation Breached By Engineer A In Design Document Review
  • AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase
  • Regulatory Compliance Verification Obligation Breached By Engineer A In Design Document Submission
  • Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents
  • Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
  • Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
  • Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents
Fulfills
  • Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report
Violates
  • Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool
  • Mentorship Succession and Peer Review Continuity Obligation Breached By Engineer A Following Engineer B Retirement
Decision Points 18

When Client W directly observed that the environmental report appeared to have been written by two different authors, should Engineer A proactively disclose the AI's generative role, or treat the AI as an internal productivity tool and disclose only if directly asked?

Options:
Proactively Disclose AI Role To Client (Board's choice): Proactively disclose the AI's generative role to Client W at the time of report submission, or immediately upon Client W's stylistic observation, identifying which sections were AI-generated, citing the journal articles used for cross-checking, and attributing AI contributions transparently. This approach treats Client W's reasonable expectation of authorship transparency, triggered by the direct observation, as activating an affirmative duty of candor under Code provisions I.5 and III.3.
Treat AI As Internal Productivity Tool: Treat the AI drafting software as an internal productivity tool equivalent to grammar-checking or reference-management software, disclose only upon direct client inquiry, and rely on the professional seal as the operative representation of Engineer A's responsibility for the report's accuracy and conclusions. This approach holds that no freestanding obligation to disclose AI tool use exists absent an explicit client requirement or a Code provision mandating such disclosure.
Acknowledge Automated Assistance Without Specifics: Respond to Client W's stylistic observation by acknowledging that the report was produced using a combination of automated drafting assistance and independent professional review, without specifying the AI tool by name or identifying which sections were AI-generated. This option attempts a middle path, avoiding outright concealment while stopping short of full attribution, but risks sustaining the materially false impression that the report was independently authored.
Toulmin Summary:
Warrants I.5 III.3 III.9

Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients; deception can arise from deliberate silence where a reasonable client would expect disclosure and where omission sustains a materially false impression. Code provision III.9 requires engineers to give credit for engineering work to those to whom credit is due, extending to the intellectual and evidentiary sources that substantiate technical conclusions. The Intellectual Authorship Integrity Obligation requires that a professional seal represent not merely quality certification but intellectual ownership and responsible charge over the work's expression. Client W's direct observation about stylistic inconsistency created a discrete, time-specific obligation to clarify, and silence at that moment transformed a prior omission into an active, ongoing misrepresentation. The Responsible Charge Active Review Obligation was partially met by Engineer A's thorough factual verification, but that verification does not discharge the separate authorship attribution obligation.

Rebuttals

Uncertainty arises because no settled NSPE Code provision at the time of the engagement explicitly mandated AI tool disclosure, and the Board concluded there is no universal freestanding obligation to disclose AI use analogous to disclosing other engineering software. One rebuttal is that Engineer A's thorough review may have sufficiently transformed the AI draft into Engineer A's own professionally accountable work product, such that the seal certifies technical accuracy and responsible charge rather than personal prose authorship. Additionally, the duty of candor may not extend to every tool or method used in professional practice: engineers are not obligated to disclose their use of spell-checkers, grammar tools, or reference databases, which leaves open whether AI drafting tools occupy a categorically different position absent explicit Code guidance.

Grounds

Engineer A used open-source AI software to generate the initial draft of the environmental groundwater monitoring report, made only minor wording adjustments to personalize the content, conducted a thorough factual review cross-checking AI-generated claims against professional journal articles, and submitted the report to Client W under a professional seal without disclosing AI involvement. Client W independently observed that the report appeared to have been written by two different authors, a stylistically accurate description of the report's dual-origin nature, and Engineer A did not respond by acknowledging the AI's generative role. The report did not cite the journal articles used for cross-checking, nor did it attribute any content to AI generation.

Should Engineer A conduct a rigorous, line-by-line technical review of the AI-generated design documents before sealing them, or is a standard QA protocol sufficient, and if neither is adequate alone, should Engineer A bring in an independent peer reviewer?

Options:
Conduct Rigorous Line-By-Line Technical Review (Board's choice): Conduct a rigorous, line-by-line technical review of all AI-generated design documents, verifying each dimension against site survey data, confirming each specification against applicable local regulations, and resolving any discrepancy before applying the professional seal. This approach treats unfamiliarity with the AI tool's internal logic as a reason to intensify outcome-based verification rather than rely on the tool's apparent outputs.
Apply Standard QA Protocol For AI Outputs: Apply the firm's standard QA protocol for AI-generated design documents at the same review depth used for conventional CAD-produced drawings, treating the AI tool as a drafting productivity aid and relying on Engineer A's domain expertise in groundwater infrastructure to catch material errors. This approach accepts that unfamiliarity with the tool's generative logic does not independently disqualify the review if the outcome-based check is consistent with normal professional practice.
Engage Independent Peer Reviewer For Verification: Engage a qualified peer reviewer or subconsultant with experience in both groundwater infrastructure design and AI-assisted drafting tools to independently verify the AI-generated design documents before Engineer A applies the professional seal. This option treats Engineer A's unfamiliarity with the tool's capabilities and failure modes as a competence gap that cannot be closed by intensified self-review alone on a safety-critical project.
Toulmin Summary:
Warrants I.1 I.2 II.2.a II.2.b

Code provision I.1 places public safety, health, and welfare as the paramount obligation of a licensed engineer, and this obligation is not aspirational; it is the foundational constraint against which all professional judgments must be measured. Code provisions I.2 and II.2.a require engineers to perform services only within areas of their competence, and this obligation extends to the tools they deploy: competence encompasses not only domain knowledge but also sufficient understanding of the AI tool's capabilities, limitations, and failure modes to exercise meaningful professional judgment over its outputs. Code provision II.2.b prohibits engineers from affixing their signatures to plans dealing with subject matter in which they lack competence. The professional seal certifies that the engineer has exercised responsible charge: that they understood, directed, and can stand behind the work's technical adequacy. A cursory review of output from a novel tool whose generative logic the engineer does not fully understand cannot satisfy that standard. The Engineering Judgment Non-Substitution Obligation requires that AI tools supplement rather than substitute for independent professional engineering judgment. The Mentorship Succession and Peer Review Continuity Obligation required Engineer A to arrange alternative peer review when Engineer B retired, rather than substituting an unvalidated AI tool for that professional oversight.

Rebuttals

Uncertainty arises because responsible charge standards have historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous outcome-based review were conducted, the engineer's unfamiliarity with the tool's internal logic might not independently defeat responsible charge. Additionally, no settled professional standard at the time of the engagement explicitly defined what constitutes 'sufficient' review of AI-generated design documents, leaving open whether a high-level review by an engineer with strong domain expertise in groundwater infrastructure could satisfy the standard for outputs that fall within that domain. A further rebuttal condition is that the safety omissions and dimensional errors might have been of a type detectable through standard engineering review protocols regardless of the generative tool used, meaning the tool's novelty may not have been the operative variable; the review depth was.

Grounds

Engineer A used a newly released AI-assisted drafting tool, with no prior experience and without fully understanding its capabilities, limitations, or failure modes, to generate preliminary engineering design documents for groundwater infrastructure modifications. Engineer A conducted only a cursory, high-level review of the AI-generated documents before applying a professional seal and submitting them to Client W. The documents were subsequently found to contain misaligned dimensions and omitted safety features required by local regulations. These defects were identified by Client W, not by Engineer A's review. Engineer B, who had previously provided quality assurance review of Engineer A's work, had retired and was no longer available. The AI drafting tool was new to the market, and Engineer A had no prior experience with it.
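The rigorous outcome-based review the Board favors can be pictured as a mechanical check of AI outputs against authoritative inputs: every dimension compared to survey data, every required safety feature confirmed present, and the seal withheld while any finding remains. The data structures, field names, and tolerance below are illustrative assumptions, not drawn from the case record.

```python
TOLERANCE_M = 0.01  # hypothetical dimensional tolerance, in meters

# Authoritative inputs (illustrative values, not from the case record)
survey_data = {"well_spacing": 15.00, "casing_depth": 42.50}
required_safety_features = {"backflow_preventer", "secondary_containment"}

# What the AI drafting tool produced (note the depth error and missing feature)
ai_output = {
    "dimensions": {"well_spacing": 15.00, "casing_depth": 41.90},
    "safety_features": {"backflow_preventer"},
}

def review(ai_output, survey_data, required_safety_features, tol=TOLERANCE_M):
    """Return every discrepancy the sealing engineer must resolve."""
    findings = []
    # Outcome-based check: each dimension against the site survey
    for name, expected in survey_data.items():
        actual = ai_output["dimensions"].get(name)
        if actual is None or abs(actual - expected) > tol:
            findings.append(f"dimension mismatch: {name} = {actual}, survey = {expected}")
    # Regulatory check: each required safety feature must be present
    for feature in required_safety_features - ai_output["safety_features"]:
        findings.append(f"missing required safety feature: {feature}")
    return findings

issues = review(ai_output, survey_data, required_safety_features)
assert issues  # the seal must not be applied while findings remain
```

The point of the sketch is that this kind of check depends only on the outputs and the authoritative references, not on understanding the tool's generative logic, which is exactly the outcome-based position the rebuttals entertain.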

Should Engineer A obtain Client W's prior informed consent before uploading confidential site data to the open-source AI platform, or may Engineer A proceed using technical safeguards or platform substitution without seeking consent?

Options:
Investigate Platform and Obtain Client Consent (Board's choice): Investigate the open-source AI platform's data handling, retention, and privacy policies before use, and obtain Client W's explicit prior informed consent to upload confidential site data to the platform. This treats the consent obligation as affirmative and non-contingent under Code provision II.1.c.
Anonymize Data Before Uploading to Platform: Use the open-source AI platform with anonymized or de-identified project data, substituting generic site descriptors for Client W's proprietary identifiers, so that AI-assisted synthesis does not expose confidential information. This treats technical de-identification as a sufficient substitute for prior client consent.
Substitute Privacy-Compliant Enterprise AI Platform: Replace the open-source tool with a privacy-compliant enterprise AI platform offering contractual data protection guarantees, or a locally deployed model with no external data transmission. This treats platform substitution as resolving the confidentiality risk without requiring explicit client consent.
Toulmin Summary:
Warrants II.1.c

Code provision II.1.c requires engineers to treat information obtained in the course of a professional engagement as confidential and not to disclose it without the client's consent. Uploading confidential client data to an open-source AI platform is tantamount to placing that information in the public domain, because the engineer cannot control how the platform processes, retains, or transmits the data. The harm of unauthorized exposure is the breach itself, independent of whether downstream misuse occurs. A competent engineer deploying any novel third-party software tool, particularly a newly released, open-source platform with unknown data handling practices, bears an affirmative pre-use obligation to investigate whether inputting confidential client data is permissible under the client relationship and to obtain explicit client consent if any confidential information will be transmitted to a third-party system. This violation is not remediated by the thoroughness of the subsequent report review, the accuracy of the final work product, or any disclosure or non-disclosure decision regarding AI authorship. The confidentiality breach stands as a discrete, self-contained ethical violation.

Rebuttals

Uncertainty arises from the question of whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of Code provision II.1.c, since the data was used instrumentally to generate a work product rather than shared with an identifiable third-party recipient in the conventional sense. A further rebuttal condition exists: if the open-source AI platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the foreseeable risk of harm insufficient to constitute a breach. Additionally, if Engineer A had obtained Client W's informed consent to use the AI platform, which did not occur here but represents a compliant pathway, the confidentiality obligation would have been satisfied, suggesting the violation is procedural rather than categorical.

Grounds

Engineer A gathered Client W's confidential site data and groundwater monitoring information and uploaded it into an open-source AI platform to synthesize the information for the environmental report. Engineer A was unfamiliar with the AI software's full functionality, including its data handling, storage, and privacy policies. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets. Engineer A did not obtain Client W's prior consent before uploading the confidential data, and did not investigate the platform's data handling practices before use. The confidential data included site-specific environmental information that may have regulatory, litigation, or competitive sensitivity.
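The anonymization option weighed above amounts to a reversible token substitution performed before any data leaves the engineer's control. A minimal sketch, with hypothetical identifiers and placeholder names that are assumptions rather than case facts:

```python
# Hypothetical mapping of proprietary identifiers to generic tokens.
# Both the identifiers and the tokens are illustrative assumptions.
SENSITIVE_TERMS = {
    "Client W": "CLIENT_A",
    "Riverside Site 12": "SITE_1",
}

def deidentify(text: str, mapping: dict) -> tuple:
    """Replace each sensitive term with a generic token; return the
    sanitized text plus the reverse mapping needed to restore it."""
    reverse = {}
    for term, token in mapping.items():
        text = text.replace(term, token)
        reverse[token] = term
    return text, reverse

def reidentify(text: str, reverse: dict) -> str:
    """Restore original identifiers in AI-generated output."""
    for token, term in reverse.items():
        text = text.replace(token, term)
    return text

draft = "Groundwater monitoring at Riverside Site 12 for Client W shows elevated readings."
safe_input, reverse = deidentify(draft, SENSITIVE_TERMS)
assert "Client W" not in safe_input  # nothing proprietary leaves the workstation
restored = reidentify(safe_input, reverse)
assert restored == draft
```

Even under this scheme, numeric monitoring values and indirect identifiers survive the substitution, which is consistent with the Board treating de-identification as a mitigation rather than a replacement for Client W's prior informed consent.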

Should Engineer A proactively disclose the AI tool's generative role to Client W, including which sections it drafted, or treat the AI as an internal drafting tool requiring no special disclosure?

Options:
Disclose AI Authorship Fully and Immediately (Board's choice): Proactively identify AI-drafted sections to Client W, explain the cross-verification process used against professional journal articles, and acknowledge AI involvement directly when Client W raises the dual-author observation. This treats the AI's generative role as a material fact a reasonable client would expect to know under Code provisions I.5 and III.3.
Treat AI as Internal Tool, Omit Disclosure: Treat the AI drafting tool as functionally equivalent to word-processing or CAD software and omit any disclosure from the report or client communications. Respond to Client W's stylistic observation by attributing the tonal variation to the engineer's own drafting process rather than AI involvement.
Add General Methodology Note, Disclose Only If Asked: Include a brief methodology note in the report acknowledging use of AI-assisted drafting software without section-level attribution, and provide fuller disclosure to Client W only if directly and specifically asked about the authorship of individual sections.
Toulmin Summary:
Warrants I.5 III.3 III.9

Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients; deception can arise from deliberate silence where a reasonable client would expect disclosure and where the omission sustains a materially false impression. Code provision III.9 requires attribution of intellectual and evidentiary sources. The professional seal implicitly represents personal authorship and responsible charge over the work's expression, not merely its factual accuracy. Client W's direct observation about stylistic inconsistency created a discrete, time-specific obligation to clarify. Competing against these is the Board's conclusion that AI tools are analogous to other engineering software (CAD, FEA), that no universal disclosure obligation exists absent a contractual requirement or affirmative misrepresentation, and that Engineer A's thorough review satisfied the competence dimension of responsible charge.

Rebuttals

Uncertainty arises because no settled NSPE Code provision at the time of the engagement explicitly mandated AI tool disclosure, and the analogy to conventional software has genuine force: engineers do not routinely disclose every drafting or analysis tool used. The rebuttal condition, whether Engineer A's thorough review sufficiently transforms AI-generated prose into Engineer A's own professional work product, is unresolved by existing professional standards. Additionally, if the duty of candor does not extend to disclosure of every tool or method, silence about AI use may not constitute deception per se. However, the specific moment of Client W's authorship observation distinguishes this case from routine non-disclosure: silence at that moment allowed a materially false impression to persist.

Grounds

Engineer A used an open-source AI tool to draft the environmental report, then made minor wording adjustments and applied their professional seal without disclosing AI involvement. Engineer A conducted a thorough factual review, cross-checking AI-generated content against professional journal articles. Client W independently observed that the report appeared to have been written by two different authors, an accurate description of its dual-origin nature. Engineer A did not respond to this observation by acknowledging the AI's role.

Should Engineer A investigate the open-source AI platform's data handling practices and obtain Client W's prior written consent before uploading confidential site data, or may Engineer A proceed using anonymization or treat the platform as equivalent to local software?

Options:
Investigate Platform and Obtain Written Consent (Board's choice): Investigate the open-source AI platform's data handling, retention, and privacy policies before use, disclose the intended use to Client W, and obtain explicit written consent for transmitting confidential site data. This treats the consent obligation as affirmative and independent of downstream work product quality under Code provision II.1.c.
Anonymize Data as Confidentiality Safeguard: Anonymize or redact site-identifying information from the data inputs before uploading to the AI platform, treating this technical measure as a sufficient safeguard against confidentiality breach without requiring explicit client consent. This approach substitutes de-identification for the prior consent obligation.
Proceed Treating AI as Local Software Equivalent: Proceed with uploading client data to the AI platform on the basis that AI processing is instrumentally equivalent to locally installed analysis software, and that no separate consent obligation is triggered. This approach treats the platform as a routine engineering tool rather than a third-party data recipient under Code provision II.1.c.
Toulmin Summary:
Warrants II.1.c

Code provision II.1.c imposes an affirmative, independent obligation to protect client confidentiality that is not contingent on downstream work product quality or accuracy. Open-source AI platforms process user-submitted data in ways that may expose it to third parties, retain it, or incorporate it into training datasets, consequences outside Engineer A's control. A competent engineer deploying any novel third-party tool with client data bears a pre-use obligation to investigate data handling policies and obtain explicit client consent before transmitting confidential information. The harm of unauthorized exposure is the breach itself, independent of whether downstream misuse occurs. Competing against this is the argument that uploading data to an AI platform may not constitute 'disclosure to a third party' within the meaning of II.1.c if the data was processed algorithmically without human access, and that the efficiency benefit of AI-assisted drafting is a legitimate professional interest.

Rebuttals

Uncertainty arises from whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of Code provision II.1.c, since the data was used instrumentally rather than shared with an identifiable person. If the platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the risk negligible. Additionally, if Engineer A reasonably but incorrectly believed the platform operated with the same data isolation as locally installed software, the breach might be characterized as a competence failure rather than a deliberate confidentiality violation, though this does not eliminate the ethical breach.

Grounds

Engineer A uploaded Client W's proprietary site characterization data and groundwater monitoring information into an open-source AI platform without first obtaining Client W's consent. Engineer A was unfamiliar with the AI software's full functionality, including its data handling, retention, and potential training data incorporation practices. The data was site-specific, potentially sensitive for regulatory, litigation, or competitive purposes. No contractual provision authorized transmission of client data to third-party systems.

After losing Engineer B's peer review function, should Engineer A perform a rigorous independent technical review of all AI-generated documents before sealing them, apply the existing QA protocol treating the AI tool as equivalent to conventional drafting software, or engage a third-party AI-experienced reviewer to fill the oversight gap?

Options:
Perform Rigorous Independent Line-By-Line Review (Board's choice): Conduct a comprehensive, independent technical review of all AI-generated design documents, verifying each dimension against site survey data and each specification against local regulatory requirements, before sealing, recognizing that the loss of Engineer B's peer review and unfamiliarity with the AI tool together demand heightened personal scrutiny.
Apply Standard QA Protocol As-Is: Apply the firm's standard QA protocol for design documents to the AI-generated outputs at the same review depth previously used for conventional CAD-produced drawings, on the basis that domain competence rather than tool familiarity is the operative standard for responsible charge.
Engage Third-Party AI-Experienced Reviewer: Engage a third-party reviewer with demonstrated experience in AI-generated engineering documents to audit safety-critical and regulatory compliance elements of the design outputs, filling the oversight gap left by Engineer B's retirement while compensating for Engineer A's lack of tool-specific expertise.
Toulmin Summary:
Warrants I.1 I.2 II.2.a II.2.b

Code provisions I.2 and II.2.a require engineers to perform services only within areas of their competence, and this obligation extends to the tools deployed: competence encompasses sufficient understanding of a tool's capabilities, limitations, and failure modes to exercise meaningful professional judgment over its outputs. Code provision II.2.b prohibits engineers from affixing their signatures to plans dealing with subject matter in which they lack competence. The professional seal certifies responsible charge: that the engineer has directed the work, understood its content, and can stand behind its technical adequacy. A cursory review of output from a novel tool whose generative logic the engineer does not understand cannot satisfy this standard. Code provision I.1 places public safety as the paramount obligation, and sealing documents with regulatory safety omissions after only cursory review directly implicates this obligation. The loss of Engineer B's peer review created an affirmative obligation to arrange a functionally equivalent alternative, not to substitute an untested AI tool for professional oversight.

Rebuttals

Uncertainty is created by the absence of an explicit NSPE Code provision mandating peer review as a precondition to practice, leaving the obligation to be derived inferentially from general competence and public safety provisions. Responsible charge standards have historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous review could theoretically have caught the errors, the question becomes whether the review actually performed was adequate, not whether the engineer understood the AI's generative logic. Additionally, if the AI tool were sufficiently mature and well-documented, and its outputs independently verifiable by Engineer A's existing domain expertise in groundwater infrastructure, the novelty of the tool alone might not establish incompetence. The rebuttal condition, whether a more rigorous review would have caught the defects, is addressed by the counterfactual analysis suggesting it would have.

Grounds

Engineer B retired, removing the quality assurance and peer review function Engineer A had structurally depended upon. Engineer A then used a newly released, open-source AI drafting tool, with no prior experience, to generate engineering design documents for a dual-scope engagement. Engineer A conducted only a cursory, high-level review of the AI-generated design documents before affixing their professional seal. The documents were subsequently found to contain misaligned dimensions and omitted safety features required by local regulations: defects that Client W, not Engineer A, identified. Had Client W not conducted an independent review, the deficient documents could have proceeded to construction.

What standard of review must Engineer A apply to AI-generated design documents before affixing a professional seal, given unfamiliarity with the AI drafting tool and the safety-critical nature of the outputs?

Options:
Conduct Rigorous Line-by-Line Technical Review (Board's choice): Conduct a rigorous, line-by-line technical review of all AI-generated design documents, verifying each dimension against site survey data and each specification against local regulatory requirements, and confirming the presence of all required safety features, before affixing the professional seal
Apply Standard QA Protocol to AI Outputs: Apply the firm's standard QA protocol for conventionally drafted design documents to AI-generated outputs, treating the AI tool as functionally equivalent to CAD software and conducting a high-level review consistent with that analogy
Engage Third-Party Reviewer for Critical Elements: Engage a qualified third-party reviewer with domain expertise to independently verify safety-critical and regulatory compliance elements of the AI-generated design documents while applying standard review to non-safety-critical components
Toulmin Summary:
Warrants I.1 II.2.a II.2.b

The professional seal legally and ethically certifies responsible charge, that the engineer has directed the work, understands its content, and can stand behind its technical adequacy (II.2.b). The competence obligation (I.2, II.2.a) extends to the tools deployed, not merely the subject matter. Public welfare is paramount (I.1), and safety-critical omissions in design documents that could reach construction represent a failure of the core public protection function of licensure. The standard of review required to satisfy responsible charge scales inversely with the engineer's familiarity with the generative tool.

Rebuttals

Uncertainty arises because responsible charge doctrine has historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous outcome-based review were performed, some argue that tool familiarity is not independently required. Additionally, no settled professional standard at the time of the engagement explicitly specified what review depth is required for AI-generated design documents, leaving open whether a high-level review by a domain-competent engineer could satisfy the standard for lower-complexity elements.

Grounds

Engineer A used a newly released, open-source AI drafting tool with no prior experience to generate engineering design documents for Client W. Engineer A then conducted only a cursory, high-level review before affixing a professional seal. Client W subsequently discovered misaligned dimensions and omitted safety features required by local regulations, defects that Engineer A's review failed to catch.

When Client W observed that the report appeared written by two different authors, should Engineer A disclose that AI software drafted the more polished sections, or respond in a way that affirms professional responsibility without identifying the AI's specific role?

Options:
Disclose AI-Drafted Sections To Client (Board's choice): Respond to Client W's stylistic observation by proactively disclosing that AI software generated the more polished sections of the report, identifying which sections were AI-drafted and which were independently authored, and confirming that all AI-generated content was factually verified against professional journal articles. This approach treats Client W's direct observation as a circumstance where continued silence would sustain a materially false impression in violation of Code provisions I.5 and III.3.
Affirm Report Reflects Professional Judgment: Treat the AI drafting tool as internal professional software equivalent to other engineering tools, and respond to Client W's observation by affirming that the report reflects Engineer A's professional judgment and bears Engineer A's seal as the responsible party, without identifying the AI's generative role. This approach holds that the professional seal is the operative representation of authorship responsibility and that no additional tool-level disclosure is required.
Acknowledge Automated Assistance Without Specifics: Acknowledge to Client W that the report was produced using a combination of automated drafting assistance and independent professional review, without specifying the AI tool by name or identifying which sections were AI-generated. This partial acknowledgment responds to the client's observation without full transparency, risking an incomplete disclosure that may still leave Client W with a false impression about the nature and extent of AI involvement.
Toulmin Summary:
Warrants I.5 III.3 III.9

Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients. Deception does not require an affirmative false statement; deliberate silence in circumstances where a reasonable client would expect disclosure, and where the omission sustains a materially false impression, constitutes a deceptive act. Client W's direct observation about stylistic inconsistency created a discrete, time-specific obligation to clarify: a client who is told their report reads as if written by two people is, in practical terms, asking why. The professional seal implicitly represents intellectual authorship and responsible charge over the work's expression, not merely quality certification. Code provision III.9's credit-giving obligation extends to the intellectual and evidentiary origins of professional work product.

Rebuttals

Uncertainty is created by the Board's own conclusion that there is no universal ethical obligation to disclose AI tool use, analogizing AI to other engineering software. The duty of candor may not extend to disclosure of every tool or method used in professional practice: engineers are not obligated to disclose which CAD software they use. Additionally, if Engineer A's thorough review sufficiently transformed the AI draft into Engineer A's own professional work product, the authorship representation may be defensible. The rebuttal condition, whether review thoroughness converts AI-generated text into engineer-authored work, remains professionally unsettled.

Grounds

Engineer A used AI software to draft the environmental report, then personalized the AI-generated prose with minor wording adjustments and submitted the report under a professional seal without disclosing the AI's role. Client W observed that the report read as if written by two different authors, an observation that was factually accurate given the report's dual-origin nature. Engineer A did not respond to this observation by disclosing the AI's generative contribution. The report's factual content had been thoroughly verified by Engineer A against professional journal articles, though those sources were not cited.

Before uploading Client W's confidential site data to an open-source AI platform, should Engineer A investigate the platform's data handling practices and obtain Client W's explicit consent, proceed under the existing engagement agreement, or use only anonymized data in the AI tool?

Options:
Investigate Platform And Obtain Informed Consent (Board's choice): Investigate the open-source AI platform's data handling, retention, and third-party access policies before use, disclose the intended use of the platform to Client W, and obtain explicit informed consent before uploading any proprietary site data or groundwater monitoring information. This approach treats the upload of confidential client data to a novel third-party platform as a pre-use obligation under Code provision II.1.c, independent of the quality of the resulting work product.
Proceed Under Existing Engagement Agreement: Treat the open-source AI platform as functionally equivalent to other third-party engineering software tools used in professional practice, and proceed with uploading client data under the existing engagement agreement without seeking additional consent. This approach holds that uploading data to an AI tool for instrumental work-product generation does not constitute disclosure to an identifiable third party within the meaning of Code provision II.1.c.
Use Anonymized Data In AI Tool Inputs: Use the AI drafting tool only with anonymized or de-identified versions of the client data, substituting generic site parameters for proprietary monitoring values in the AI inputs, while retaining the actual confidential data for Engineer A's independent verification and final report preparation. This option attempts to mitigate confidentiality risk without seeking client consent, but does not resolve the underlying obligation to investigate the platform's data handling practices before any use.
Toulmin Summary:
Warrants II.1.c I.2 II.2.a

Code provision II.1.c imposes an affirmative, independent obligation to protect client confidentiality that is not contingent on the accuracy or quality of the resulting work product. A competent engineer deploying any novel third-party platform with client data bears a pre-use obligation to investigate data handling and privacy policies and to obtain explicit client consent if confidential information will be transmitted to a third-party system. The harm of unauthorized exposure is the breach itself, independent of whether misuse occurs. The confidentiality obligation is not remediated by the thoroughness of subsequent review, the accuracy of the final work product, or any disclosure decision regarding AI authorship. This violation stands as a separate and self-contained ethical breach.

Rebuttals

Uncertainty arises from the question of whether uploading data to an open-source AI platform constitutes 'disclosure to a third party' within the meaning of II.1.c, since the data was used instrumentally to generate a work product rather than shared with an identifiable human third party. Additionally, if the open-source platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the foreseeable risk insufficient to constitute a breach. The confidentiality obligation might also be partially rebutted if Engineer A had obtained Client W's informed consent, or if the engagement contract authorized use of third-party software tools without specifying consent requirements.

Grounds

Engineer A uploaded Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining Client W's prior consent. Engineer A was self-admittedly unfamiliar with the AI software's full functionality, including its data handling, retention, and third-party access policies. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets. Engineer B's retirement had removed Engineer A's primary quality assurance mechanism, creating professional pressure to use AI assistance for a complex dual-scope engagement.

Should Engineer A conduct a rigorous line-by-line technical review of all AI-generated design documents before sealing them, apply the firm's standard QA protocol as used for conventional drafting tools, or engage a qualified peer reviewer to verify safety-critical elements?

Options:
Conduct Rigorous Line-By-Line Technical Review Board's choice Conduct a rigorous, line-by-line technical review of all AI-generated design document outputs: checking each dimension against site survey data, verifying each specification against local regulatory requirements, before affixing the professional seal, on the basis that unfamiliarity with the tool's failure modes demands heightened scrutiny.
Apply Standard QA Protocol to AI Outputs Apply the firm's standard QA protocol to AI-generated design documents at the same review depth used for conventional CAD-produced drawings, treating the AI tool as an equivalent drafting instrument and relying on domain competence rather than tool-specific familiarity to satisfy responsible charge.
Engage Peer Reviewer for Critical AI Elements Engage a qualified peer reviewer or subconsultant to independently verify safety-critical and regulatory compliance elements of the AI-generated design documents while applying standard review to non-critical sections, compensating for unfamiliarity with the tool's limitations through external expertise.
Toulmin Summary:
Warrants I.1 I.2 II.2.a II.2.b

The professional seal legally and ethically certifies that the engineer has exercised responsible charge: that they understood, directed, and can stand behind the work's technical adequacy (II.2.b). The competence obligation (I.2, II.2.a) extends to the tools deployed, not merely the subject matter: an engineer using a novel AI tool whose generative logic they do not fully understand must apply verification rigor proportionate to that epistemic gap. The public safety paramount obligation (I.1) functions as a non-negotiable constraint: safety-critical omissions that could reach construction represent a failure of the core public protection function of licensure. Competing against these is the argument that responsible charge doctrine has historically focused on review outcomes rather than process comprehension: if outputs are technically adequate, the review method may be immaterial.

Rebuttals

Uncertainty arises from the absence of a defined professional standard specifying what constitutes 'sufficient' review of AI-generated design documents. A rebuttal condition holds that if the safety omissions and dimensional errors were of a type detectable through standard domain-competent review (competence Engineer A possessed in groundwater infrastructure), then the failure was one of review thoroughness rather than tool incompetence, and a more rigorous application of standard review protocols might have satisfied responsible charge without requiring specialized AI expertise. Additionally, the analogy to conventional CAD software creates uncertainty: if AI drafting tools are treated as instrumentally equivalent to other design software, the review standard applicable to CAD outputs might be argued to apply equally here.

Grounds

Engineer A used a newly released, open-source AI drafting tool with no prior experience to generate engineering design documents for Client W. Engineer A then conducted only a cursory, high-level review of those documents before affixing their professional seal. Client W subsequently identified misaligned dimensions and omitted safety features required by local regulations: defects that Engineer A's review had not caught. Engineer B, who had previously provided quality assurance review, had retired before the engagement began.
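The board's chosen review standard is procedural, and its mechanics can be illustrated with a short sketch. Everything below is hypothetical scaffolding (the element names, survey dictionary, regulatory-feature map, and tolerance are invented for illustration, not drawn from the case record); it only shows what "checking each dimension against site survey data and each specification against regulatory requirements" could look like as a systematic pass rather than a cursory one.

```python
# Hypothetical sketch of a line-by-line verification pass over AI-generated
# design elements. All names and values are illustrative, not from the case.
from dataclasses import dataclass

@dataclass
class DesignElement:
    name: str
    dimension_m: float           # dimension as drawn in the AI output
    required_features: set[str]  # safety features present in the drawing

def verify_elements(elements, survey_dims, regulatory_features, tol=0.01):
    """Flag dimensions that disagree with site survey data, and safety
    features required by regulation but omitted from the drawings."""
    findings = []
    for e in elements:
        surveyed = survey_dims.get(e.name)
        if surveyed is not None and abs(e.dimension_m - surveyed) > tol:
            findings.append(f"{e.name}: drawn {e.dimension_m} m, surveyed {surveyed} m")
        missing = regulatory_features.get(e.name, set()) - e.required_features
        for feat in sorted(missing):
            findings.append(f"{e.name}: omitted required safety feature '{feat}'")
    return findings

# Illustrative data: one misaligned dimension, one omitted safety feature,
# mirroring the two defect classes Client W identified.
elements = [
    DesignElement("well-casing", 12.50, {"sanitary-seal"}),
    DesignElement("overflow-pipe", 3.10, set()),
]
survey = {"well-casing": 12.50, "overflow-pipe": 2.95}
regs = {"well-casing": {"sanitary-seal"}, "overflow-pipe": {"backflow-preventer"}}
print(verify_elements(elements, survey, regs))
```

The point of the sketch is the exhaustive per-element loop: a cursory high-level review skips exactly the comparisons that surface these two defect classes.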

Should Engineer A proactively disclose the AI tool's generative role in response to Client W's authorship observation, or address the concern through explanation or revision without specifically disclosing AI involvement?

Options:
Disclose AI Role and Cite Sources Board's choice Proactively disclose the AI tool's generative role in response to Client W's authorship observation, identify which sections were AI-generated and which were independently authored, and add citations to the professional sources used in verification. This treats Client W's observation as triggering an immediate candor obligation under Code provisions I.5, III.3, and III.9.
Explain Report Reflects Professional Verification Treat the AI tool as internal drafting software equivalent to other professional writing aids, and respond to Client W's stylistic observation by explaining that the report reflects thorough factual verification and professional judgment. This approach does not specifically disclose AI involvement, framing the tool as a routine drafting aid rather than a generative author.
Revise Prose Without Disclosing AI Involvement Acknowledge Client W's observation by offering to revise the report's prose into a consistent single-author voice through additional editing, without specifically disclosing AI involvement. This treats the concern as a stylistic rather than an authorship or transparency issue.
Toulmin Summary:
Warrants I.5 III.3 III.9

Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients: deception does not require an affirmative false statement but can arise from deliberate silence where a reasonable client would expect disclosure and where the omission sustains a materially false impression. Client W's direct observation about stylistic inconsistency constituted an implicit inquiry about authorship that created a discrete, time-specific obligation to clarify. Code provision III.9's credit-giving obligation extends to the intellectual and evidentiary sources substantiating technical conclusions, including AI-generated prose and uncited journal articles used for verification. Competing against these is the board's general conclusion that no universal disclosure obligation exists absent contractual requirement, and that the professional seal and responsible charge, not authorship attribution, are the operative accountability mechanisms in engineering.

Rebuttals

Uncertainty is generated by the absence of an explicit NSPE Code provision mandating AI tool disclosure at the time of the engagement, leaving the obligation to be derived inferentially from general candor and non-deception provisions. A rebuttal condition holds that the duty of candor may not extend to disclosure of every tool or method used in professional practice: engineers are not obligated to disclose use of word processors, spreadsheet software, or other drafting aids, and if Engineer A's thorough review sufficiently transformed the AI draft into Engineer A's own professionally verified work product, the authorship representation implicit in the seal may be defensible. Additionally, the virtue ethics rebuttal notes that engineers routinely rely on drafting assistance without attribution, and the novelty of AI as a drafting tool may not yet carry settled professional norms distinguishing it from other forms of professional assistance.

Grounds

Engineer A used open-source AI software to draft the environmental report for Client W, then conducted a thorough factual review, cross-checking AI-generated content against professional journal articles, before submitting the report under their professional seal without any disclosure of AI involvement. Client W observed that the report read as if written by two different authors, a stylistically accurate description of the report's dual-origin nature. Engineer A did not respond to this observation by disclosing the AI's role. The report contained no citations to the journal articles used for cross-checking and no attribution of AI-generated sections.

After Engineer B's retirement removed Engineer A's primary quality assurance mechanism, did Engineer A have an independent ethical obligation to arrange a functionally equivalent alternative peer review process before undertaking a complex dual-scope engagement? And did the decision to substitute an open-source AI tool for that oversight independently violate the client data confidentiality obligation by necessarily exposing Client W's proprietary site data to a public platform without prior consent?

Options:
Arrange Alternative Peer Reviewer Before Engaging Board's choice Before accepting the dual-scope engagement, arrange an alternative qualified peer reviewer or subconsultant to provide quality assurance review, obtain Client W's informed consent before selecting any AI tool that would require uploading confidential site data, and identify a privacy-compliant AI alternative or proceed without AI assistance if consent is withheld
Proceed Relying on Personal Domain Expertise Proceed with the engagement relying on personal domain expertise as the primary quality assurance mechanism, use the open-source AI tool as an internal drafting aid treating data upload as instrumentally equivalent to using any cloud-based software service, and disclose AI tool use and data handling practices to Client W only if directly asked or if the contract requires it
Limit Scope to Verified Solo Capabilities Scope the engagement to match verified solo capabilities by limiting AI tool use to non-confidential, publicly available reference data for drafting assistance while conducting all site-specific analysis and document generation manually, deferring the design document component until a qualified peer reviewer can be engaged
Toulmin Summary:
Warrants I.2 II.1.c II.2.a

Code provisions I.2 and II.2.a require engineers to undertake assignments only when qualified, and qualification encompasses not only technical domain knowledge but also the professional infrastructure necessary to deliver work of adequate quality. When an established quality assurance mechanism becomes unavailable, the engineer bears an affirmative obligation to arrange a functionally equivalent alternative before accepting complex, high-stakes work. AI tools are not peer reviewers: they do not apply independent professional judgment, cannot identify regulatory non-compliance from contextual knowledge, and cannot assume professional responsibility. Separately and independently, Code provision II.1.c imposes an absolute confidentiality obligation: uploading Client W's proprietary site data to an open-source platform without prior consent exposed that information to potential third-party access, retention, or reuse that Engineer A could not control, a breach that stands entirely apart from questions of report quality or AI disclosure. The structural conflict between these two obligations (the need for quality assurance and the confidentiality constraint on the only available compensating mechanism) was resolvable only through proactive planning before the engagement began.

Rebuttals

Uncertainty arises from the absence of an explicit NSPE Code provision mandating peer review as a precondition to practice, leaving the succession obligation to be derived from general competence and public welfare provisions. A rebuttal condition holds that if Engineer A's own domain expertise was sufficient to independently verify the work product, and Engineer A did possess genuine competence in groundwater infrastructure and environmental assessment, the absence of a peer reviewer might not independently constitute an ethical violation, provided the engineer's own review was sufficiently rigorous. On the confidentiality question, uncertainty arises from whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of II.1.c, since the data was used instrumentally rather than shared with an identifiable recipient, and if the platform's data handling practices were such that uploaded data was provably isolated and not retained, a consequentialist analysis might not find foreseeable harm.

Grounds

Engineer B, who had served as Engineer A's primary mentor and quality assurance reviewer, retired before the Client W engagement began. Engineer A then accepted a complex dual-scope engagement, a comprehensive environmental contaminant characterization report and engineering design documents for infrastructure modifications, without arranging alternative peer review. Engineer A chose to use a newly released, open-source AI tool with no prior experience, uploading Client W's confidential site data and groundwater monitoring information into the public platform without obtaining Client W's prior consent. Engineer A was unfamiliar with the AI software's full functionality, including its data handling and retention practices.

Should Engineer A perform a rigorous, element-by-element technical review of AI-generated design documents before sealing them, apply the firm's standard QA protocol as used for conventionally drafted documents, or engage a third-party reviewer with AI-specific experience to verify safety-critical elements?

Options:
Perform Rigorous Line-By-Line Technical Review Board's choice Conduct a comprehensive, line-by-line review of all AI-generated design documents, verifying each dimension against site survey data and each specification against local regulatory requirements, before affixing the professional seal, treating unfamiliarity with the AI tool as a factor requiring heightened personal scrutiny.
Apply Standard QA Protocol As-Is Apply the firm's existing QA protocol for conventionally drafted design documents to the AI-generated outputs at the same review depth, on the basis that the engineer's domain competence, not tool familiarity, is the operative standard for responsible charge.
Engage Third-Party AI-Experienced Reviewer Engage a qualified third-party reviewer with demonstrated experience evaluating AI-generated engineering outputs to independently verify safety-critical and regulatory compliance elements before the seal is affixed, compensating for Engineer A's lack of tool-specific expertise through external verification.
Toulmin Summary:
Warrants I.1 I.2 II.2.a II.2.b

The professional seal legally and ethically certifies that the engineer has exercised responsible charge: that they understood, directed, and can stand behind the work's technical adequacy (II.2.b). The competence obligation (I.2, II.2.a) extends to the tools deployed, not merely the subject matter: an engineer using a novel AI tool whose generative logic they do not fully understand must apply verification rigor proportionate to that epistemic gap. The public safety paramount obligation (I.1) functions as a non-negotiable constraint: safety-critical omissions in design documents that could reach construction represent a failure of the core public protection function of licensure. The engineering judgment non-substitution obligation holds that AI-generated outputs cannot substitute for the engineer's own professional judgment over safety-critical elements.

Rebuttals

Uncertainty arises because responsible charge standards have historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous outcome-based review were performed, some argue the generative mechanism is irrelevant. Additionally, no settled professional standard at the time of the engagement explicitly defined what depth of review of AI-generated design documents was required to satisfy responsible charge, leaving open whether a high-level review by a domain-competent engineer might suffice for lower-risk elements. A further rebuttal holds that the harm was contingent on the cursory review, not inherent to AI tool use: the tool adoption itself was not unethical, only the inadequate review was.

Grounds

Engineer A used a newly released, open-source AI drafting tool with no prior experience to generate engineering design documents for Client W. Engineer A then conducted only a cursory, high-level review of those documents before affixing their professional seal and submitting them. Client W subsequently identified misaligned dimensions and omitted safety features required by local regulations: defects that Engineer A's review had not detected. Engineer B, who had previously provided quality assurance review, had retired before this engagement.

After Engineer B's retirement eliminated Engineer A's primary QA resource, should Engineer A arrange a functionally equivalent peer reviewer before proceeding with the Client W engagement, proceed relying on personal domain competence, or disclose the QA gap to Client W and propose a reduced scope?

Options:
Arrange Alternative Qualified Peer Reviewer Board's choice Arrange an alternative qualified peer reviewer, such as a trusted colleague, a professional review service, or a subconsultant with relevant expertise, to provide quality assurance review of work products before submission to Client W, treating the loss of Engineer B's oversight as a competence infrastructure gap that must be closed before undertaking a safety-critical, dual-scope engagement involving an unfamiliar AI tool. This approach derives the peer review obligation inferentially from Code provisions I.2 and II.2.a's requirement that engineers undertake only assignments for which they are qualified.
Proceed Relying On Own Domain Competence Proceed with the engagement relying on Engineer A's own domain competence in groundwater infrastructure as the primary quality assurance mechanism, treating the loss of Engineer B's review as an operational change rather than a disqualifying competence gap. This approach holds that no explicit NSPE Code provision mandates peer review as a precondition to practice, and that Engineer A's professional judgment and technical expertise are sufficient to satisfy the responsible charge standard.
Disclose QA Change And Propose Reduced Scope Disclose to Client W that Engineer B's retirement has altered the firm's quality assurance structure, and propose a reduced or phased scope limited to work products within Engineer A's independently verifiable competence, deferring or subcontracting the AI-assisted components until adequate oversight infrastructure is in place. This option prioritizes transparency and client protection over continuity of the engagement, but may not be necessary if a peer reviewer can be arranged.
Toulmin Summary:
Warrants I.2 II.2.a

Code provisions I.2 and II.2.a require engineers to undertake assignments only when qualified, and qualification encompasses not only technical domain knowledge but also the professional infrastructure necessary to deliver work of adequate quality. An engineer who knows they have a recognized weakness in a critical deliverable component, who has lost their primary quality assurance resource, and who then deploys an untested tool as a replacement without independent verification of that tool's reliability has not satisfied the competence standard. AI tools are not peer reviewers: they do not apply independent professional judgment, cannot identify regulatory non-compliance from contextual knowledge, and cannot assume professional responsibility for the work. The substitution also compounded the ethical problem by requiring upload of confidential client data to an open-source platform. The virtue of prudence requires accurate self-assessment of limitations and deliberate compensatory measures when those limits are approached.

Rebuttals

Uncertainty is created by the absence of an explicit NSPE Code provision mandating peer review as a precondition to practice, leaving the obligation to be derived inferentially from general competence and public welfare provisions. If the AI tool were sufficiently mature, well-documented, and its outputs independently verifiable by Engineer A's existing domain expertise, the novelty of the tool alone might not establish an ethical obligation to seek alternative oversight. Additionally, if no qualified peer reviewer was reasonably accessible within the project timeline and budget, the obligation to arrange alternative review would be rebutted by practical impossibility, and the engineer's domain competence might be argued sufficient to satisfy the competence standard independently.

Grounds

Engineer B had served as Engineer A's primary quality assurance resource, providing peer review and mentorship that was integral to Engineer A's professional practice. When Engineer B retired before the Client W engagement, Engineer A lost that oversight mechanism. Engineer A then accepted a dual-scope engagement, a comprehensive contaminant characterization report and engineering design documents for infrastructure modifications, and chose to deploy a newly released, open-source AI drafting tool with no prior experience as a substitute for that professional oversight. Engineer A self-acknowledged a recognized weakness in technical writing. The resulting design documents contained misaligned dimensions and omitted safety features that Engineer A's cursory review did not detect.

Should Engineer A investigate the open-source AI platform's data handling practices and obtain Client W's explicit consent before uploading confidential site data, or may Engineer A proceed by anonymizing inputs or treating the platform as equivalent to standard third-party engineering software?

Options:
Investigate Platform and Obtain Client Consent Board's choice Investigate the open-source AI platform's data handling, retention, and third-party access policies before use, disclose the intended use to Client W, and obtain explicit written consent for uploading confidential site data. This treats the consent obligation as affirmative and non-contingent under Code provision II.1.c.
Use Anonymized Data for AI Assistance Use the open-source AI platform with anonymized or generalized site data, substituting non-identifying parameters for proprietary monitoring values, so that the AI tool can assist with report structure without exposing Client W's confidential information. This treats technical de-identification as a sufficient substitute for prior client consent.
Treat AI Platform as Standard Third-Party Software Treat the open-source AI platform as functionally equivalent to other third-party engineering software routinely used in practice, such as cloud-based CAD or analysis platforms, and proceed with data upload without seeking separate client consent. This approach holds that no distinct confidentiality obligation is triggered beyond what applies to ordinary professional software tools.
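The board's chosen option describes a sequential pre-use duty: investigate the platform's data handling first, then obtain explicit consent, and only then upload confidential material. That ordering can be sketched as a simple gate; the policy fields, function name, and return values below are hypothetical illustrations of the reasoning, not an NSPE-prescribed procedure.

```python
# Hypothetical pre-use gate mirroring the board's reasoning under II.1.c:
# no confidential upload without an investigated policy AND explicit consent.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PlatformPolicy:
    retains_inputs: bool       # does the platform keep submitted data?
    trains_on_inputs: bool     # is submitted data folded into training sets?
    third_party_access: bool   # can other parties reach submitted data?

def may_upload(data_confidential: bool,
               policy: Optional[PlatformPolicy],
               client_consent: bool) -> Tuple[bool, str]:
    """Return (allowed, reason). Non-confidential data passes; confidential
    data requires a reviewed policy and explicit written client consent."""
    if not data_confidential:
        return True, "non-confidential data"
    if policy is None:
        return False, "data handling policy not yet investigated (pre-use duty)"
    if not client_consent:
        return False, "explicit client consent required under II.1.c"
    if policy.retains_inputs or policy.trains_on_inputs or policy.third_party_access:
        return True, "permitted by consent, but residual exposure was disclosed"
    return True, "permitted by consent"

# Engineer A's actual path: confidential data, uninvestigated platform, no consent.
print(may_upload(True, None, False))
# → (False, 'data handling policy not yet investigated (pre-use duty)')
```

Note that the gate fails Engineer A's path at the first confidentiality check, before consent is even reached, which tracks the board's view that the investigation duty precedes the consent duty.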
Toulmin Summary:
Warrants II.1.c

Code provision II.1.c imposes an affirmative, independent obligation to protect client confidentiality that is not contingent on the quality or accuracy of the resulting work product. A competent engineer deploying any novel third-party software tool, particularly an open-source platform with unknown data handling practices, bears an affirmative pre-use duty to investigate how that system will handle client data before transmitting it, and to obtain explicit client consent if confidential information will be exposed to a third-party system. The harm of unauthorized exposure is the breach itself, independent of whether downstream misuse occurs. This violation stands entirely apart from questions about report quality, AI disclosure, or design document accuracy and is not remediated by the thoroughness of subsequent review. From a consequentialist perspective, the foreseeable risk of harm to Client W's proprietary interests (regulatory exposure, competitive harm, litigation risk) outweighed the drafting efficiency gained, and that risk calculus should have been apparent to a competent engineer before acting.

Rebuttals

Uncertainty arises from the question of whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of Code provision II.1.c, since the data was used instrumentally to generate a work product rather than shared with an identifiable third-party recipient. If the open-source AI platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the risk sufficiently low to be outweighed by the efficiency benefit. Additionally, if Engineer A had obtained Client W's informed consent to use the AI platform, even implicitly through a broad project authorization, the confidentiality breach would be rebutted.

Grounds

Engineer A uploaded Client W's proprietary site data and groundwater monitoring information (information with potential regulatory, litigation, and competitive sensitivity) into an open-source, publicly accessible AI platform without first obtaining Client W's knowledge or consent. Engineer A was self-admittedly unfamiliar with the AI software's full functionality, including its data handling, retention, and third-party access policies. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets, creating risks of disclosure beyond Engineer A's control.

Given that Engineer B's retirement removed Engineer A's primary quality assurance mechanism and that Engineer A had no prior experience with the AI drafting tool, should Engineer A perform a rigorous line-by-line technical review before sealing, apply the standard QA protocol as-is, or engage an independent peer reviewer to verify safety-critical elements?

Options:
Perform Rigorous Line-By-Line Technical Review Board's choice Conduct a rigorous, line-by-line technical review of all AI-generated design documents: verifying each dimension against site survey data, each specification against local regulatory requirements, before sealing, treating the combined loss of peer review and unfamiliarity with the AI tool as factors requiring the highest level of personal scrutiny.
Apply Standard QA Protocol As-Is Apply the firm's standard QA protocols to AI-generated design documents at the same review intensity used for conventionally drafted CAD outputs, treating the AI tool as an advanced drafting aid whose outputs are adequately evaluated through existing domain-competent review procedures.
Engage Independent Peer Reviewer Before Sealing Engage a qualified peer reviewer or subconsultant to independently verify safety-critical and regulatory compliance elements of the AI-generated design documents before sealing, restoring the quality assurance function lost with Engineer B's retirement and compensating for Engineer A's lack of tool-specific expertise.
Toulmin Summary:
Warrants I.1 I.2 II.2.a II.2.b

Responsible charge requires the engineer to have directed, understood, and be able to certify the technical adequacy of sealed work (II.2.b). Competence obligations extend to the tools deployed, not merely the subject matter (I.2, II.2.a). The standard of review required to satisfy responsible charge scales inversely with the engineer's familiarity with the generative tool. When an established quality assurance mechanism is lost, the engineer bears an affirmative obligation to arrange a functionally equivalent alternative before undertaking complex, safety-critical engagements. Public welfare is paramount (I.1) and cannot be subordinated to efficiency gains from novel tool adoption.

Rebuttals

Responsible charge doctrine has historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous outcome-based review were performed, unfamiliarity with the tool's internal logic might not independently constitute a breach. Additionally, no explicit NSPE Code provision mandates peer review as a precondition to practice, leaving the obligation to be derived inferentially from general competence and public welfare provisions. A high-level review by a domain-competent engineer might be argued sufficient if the AI tool's outputs were of a type amenable to rapid expert verification.

Grounds

Engineer B retired, removing the primary peer review mechanism Engineer A had relied upon. Engineer A then accepted a dual-scope engagement and deployed a novel, newly released open-source AI drafting tool with no prior experience. Engineer A conducted only a cursory, high-level review of the AI-generated design documents before affixing a professional seal. The documents were subsequently found to contain misaligned dimensions and omitted safety features required by local regulations, defects not caught by Engineer A's review but identified by Client W independently.

When Client W directly observed that the report appeared to have been written by two different authors, accurately identifying its dual-origin nature, should Engineer A disclose the AI tool's generative role, deflect with a technical explanation, or offer revision without attribution?

Options:
Disclose AI Role Upon Client Observation Board's choice Disclose the AI tool's generative role to Client W at the moment of the stylistic inconsistency observation, identifying which sections were AI-drafted and which were independently authored, and providing context for the factual review process conducted before sealing.
Explain Stylistic Variation as Technical Density Respond to Client W's stylistic observation by explaining that the report reflects different levels of technical density across sections, treating the AI tool as an internal drafting aid equivalent to other software tools and not requiring specific disclosure.
Offer Prose Revision Without Disclosing AI Acknowledge Client W's observation and offer to revise the report's prose for stylistic consistency before final submission, without specifically attributing the inconsistency to AI generation, on the basis that the thorough factual review sufficiently transformed the draft into Engineer A's own professional work product.
Toulmin Summary:
Warrants I.5 III.3 III.9 II.2.b

Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients; deception does not require an affirmative false statement but can arise from deliberate silence where a reasonable client would expect disclosure and where the omission sustains a materially false impression. Client W's direct observation about authorial inconsistency constituted an implicit inquiry about the report's provenance, creating a discrete, time-specific obligation to clarify. Code provision III.9 requires giving credit for engineering work to those to whom credit is due, which extends to the intellectual and evidentiary sources, including AI-generated prose and uncited journal articles, that substantiate technical conclusions. The professional seal implicitly represents intellectual authorship and responsible charge over the work's expression, not merely its factual accuracy.

Rebuttals

The duty of candor may not extend to disclosure of every tool or method used in professional practice: engineers are not obligated to disclose use of CAD software, finite element analysis tools, or other drafting aids. If Engineer A's thorough review sufficiently transformed the AI draft into Engineer A's own professional work product, the authorship representation may be defensible. No settled professional standard at the time of the engagement explicitly defined the threshold of review depth required to convert AI-generated text into engineer-authored work. Code provision III.9's credit obligation may apply only when another engineer's or author's work is directly incorporated, not when AI-generated synthesis is independently verified and corrected.

Grounds

Engineer A used an open-source AI tool to draft the environmental report, then conducted a thorough factual review, cross-checking AI-generated claims against professional journal articles, before sealing and submitting the report without any disclosure of AI involvement. The report exhibited a stylistic inconsistency that Client W independently detected, observing that it appeared written by two different authors. This observation was factually accurate: AI-generated prose tends toward uniform polish that differs from Engineer A's more variable human writing style. Engineer A did not respond to Client W's observation by disclosing the AI's role. The report also omitted citations to the journal articles used for cross-checking.

Should Engineer A obtain Client W's explicit prior consent before uploading confidential site data to the open-source AI platform, or may Engineer A proceed by anonymizing the data or limiting inputs to publicly available information?

Options:
Obtain Explicit Prior Client Consent (Board's choice): Obtain Client W's explicit prior consent before uploading any confidential site data to the open-source AI platform, after disclosing the platform's data handling characteristics and the foreseeable risks of third-party exposure. This treats the consent obligation as affirmative, non-contingent, and independent of work product quality under Code provision II.1.c.
Anonymize Data Before Platform Upload: Anonymize or aggregate the site data before uploading it to the AI platform by removing client-identifying information, location coordinates, and proprietary monitoring parameters, treating de-identification as a sufficient confidentiality safeguard that eliminates the need for prior client consent.
Input Only Publicly Available Data to Platform: Use the open-source AI platform only for drafting prose structure and generic technical language, inputting only publicly available regulatory standards and general site-type parameters rather than Client W's proprietary data. This approach avoids the consent question by excluding confidential information from the platform entirely.
Toulmin Summary:
Warrants (II.1.c)

Code provision II.1.c imposes an affirmative, non-contingent obligation to protect client confidentiality that precedes and is independent of questions about work product quality or AI disclosure. A competent engineer deploying any novel third-party platform with client data bears an independent obligation to investigate the data handling, storage, and privacy policies of that tool before use, and to obtain explicit client consent if confidential information will be transmitted to a third-party system. The harm of unauthorized exposure is the breach itself, independent of whether actual misuse occurs. The loss of Engineer B's peer review created professional pressure to use AI as a compensating mechanism, but the only available open-source tool necessarily exposed confidential data, creating a structural conflict resolvable only by proactive planning before engagement acceptance.

Rebuttals

Uncertainty arises from whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of Code provision II.1.c, since the data was used instrumentally to generate a work product rather than shared with an identifiable third party for their benefit. If the open-source AI platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the foreseeable risk insufficient to constitute a breach. The confidentiality obligation might also be partially rebutted if Engineer A had obtained Client W's informed consent to use the AI platform, or if the data uploaded was sufficiently anonymized or aggregated to prevent identification.

Grounds

Engineer A uploaded Client W's confidential site data and groundwater monitoring information (proprietary environmental data with potential regulatory, litigation, and competitive sensitivity) into an open-source AI platform without obtaining Client W's prior consent. Engineer A was unfamiliar with the AI software's full functionality, including its data handling, retention, and third-party access policies. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets, creating foreseeable risks of disclosure beyond Engineer A's control.
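The anonymization option discussed above can be made concrete. The sketch below strips client-identifying fields from a monitoring record before any third-party upload; the field names and values are invented for illustration and do not come from the case record.

```python
# Hypothetical illustration of de-identifying site data before upload.
# Field names are invented for this sketch, not taken from the case.

SENSITIVE_FIELDS = {"client_name", "site_address", "latitude", "longitude",
                    "parcel_id"}

def anonymize_record(record: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

record = {
    "client_name": "Client W",
    "latitude": 41.88,
    "longitude": -87.63,
    "analyte": "trichloroethylene",
    "concentration_ug_per_l": 4.2,
}

cleaned = anonymize_record(record)
print(cleaned)  # identifying keys are gone; technical values remain
```

Note that key-stripping of this kind is only a starting point: as the rebuttal above implies, data may remain re-identifiable from context (for example, a rare analyte at a known site type), which is one reason the Board treated explicit consent, not de-identification, as the safe course.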

Timeline: 13 sequenced items (6 actions, 7 events). Legend: Action (volitional), Event (occurrence); associated decision points are listed with each item.
DP2
Engineer A's obligation to conduct a substantively adequate review of AI-generat...
Conduct Rigorous Line-By-Line Technical ... Apply Standard QA Protocol For AI Output... Engage Independent Peer Reviewer For Ver...
DP6
Engineer A: Responsible Charge and Competence Verification Obligation for AI-Gen...
Perform Rigorous Independent Line-By-Lin... Apply Standard QA Protocol As-Is Engage Third-Party AI-Experienced Review...
DP12
Engineer A: Client Data Confidentiality and Peer Review Succession Obligation Fo...
Arrange Alternative Peer Reviewer Before... Proceed Relying on Personal Domain Exper... Limit Scope to Verified Solo Capabilitie...
DP14
Engineer A: Peer Review Succession and Competence Infrastructure Obligation Foll...
Arrange Alternative Qualified Peer Revie... Proceed Relying On Own Domain Competence Disclose QA Change And Propose Reduced S...
DP16
Engineer A: Review Depth and Competence Obligation for AI-Generated Design Docum...
Perform Rigorous Line-By-Line Technical ... Apply Standard QA Protocol As-Is Engage Independent Peer Reviewer Before ...
DP1
Engineer A's obligation to disclose AI-generated authorship to Client W upon sub...
Proactively Disclose AI Role To Client Treat AI As Internal Productivity Tool Acknowledge Automated Assistance Without...
DP3
Engineer A's obligation to obtain Client W's prior informed consent before uploa...
Investigate Platform and Obtain Client C... Anonymize Data Before Uploading to Platf... Substitute Privacy-Compliant Enterprise ...
DP4
Engineer A: Proactive AI Disclosure and Authorship Transparency Obligation Towar...
Disclose AI Authorship Fully and Immedia... Treat AI as Internal Tool, Omit Disclosu... Add General Methodology Note, Disclose O...
DP5
Engineer A: Client Consent for Third-Party Data Sharing - Uploading Confidential...
Investigate Platform and Obtain Written ... Anonymize Data as Confidentiality Safegu... Proceed Treating AI as Local Software Eq...
DP8
Engineer A: Disclosure of AI Authorship to Client When Stylistic Anomaly Is Dire...
Disclose AI-Drafted Sections To Client Affirm Report Reflects Professional Judg... Acknowledge Automated Assistance Without...
DP9
Engineer A: Obtaining Client Consent Before Uploading Confidential Site Data to ...
Investigate Platform And Obtain Informed... Proceed Under Existing Engagement Agreem... Use Anonymized Data In AI Tool Inputs
DP11
Engineer A: AI Tool Disclosure and Authorship Attribution Obligation Toward Clie...
Disclose AI Role and Cite Sources Explain Report Reflects Professional Ver... Revise Prose Without Disclosing AI Invol...
DP15
Engineer A: Client Data Confidentiality Obligation in AI Tool Use - Uploading Co...
Investigate Platform and Obtain Client C... Use Anonymized Data for AI Assistance Treat AI Platform as Standard Third-Part...
DP17
Engineer A: Disclosure Obligation and Authorship Integrity When Client Directly ...
Disclose AI Role Upon Client Observation Explain Stylistic Variation as Technical... Offer Prose Revision Without Disclosing ...
DP18
Engineer A: Client Data Confidentiality Obligation When Uploading Proprietary Si...
Obtain Explicit Prior Client Consent Anonymize Data Before Platform Upload Input Only Publicly Available Data to Pl...
  • 3. Input Confidential Data into Public AI (Report drafting phase)
  • 4. Conducted Thorough Report Review (Report review phase, prior to report submission)
  • 5. Submitted Report Without AI Disclosure (Report submission phase)
DP7
Engineer A: Depth of Review Required Before Sealing AI-Generated Design Document...
Conduct Rigorous Line-by-Line Technical ... Apply Standard QA Protocol to AI Outputs Engage Third-Party Reviewer for Critical...
DP10
Engineer A: Responsible Charge and Competence Verification Obligation for AI-Gen...
Conduct Rigorous Line-By-Line Technical ... Apply Standard QA Protocol to AI Outputs Engage Peer Reviewer for Critical AI Ele...
DP13
Engineer A: Responsible Charge and Competence Verification Obligation for AI-Gen...
Perform Rigorous Line-By-Line Technical ... Apply Standard QA Protocol As-Is Engage Third-Party AI-Experienced Review...
  • 7. Conducted Cursory Design Document Review (Design document review phase, prior to submission to Client W)
  • 8. Client W Engagement Established (Beginning of the project timeline; after Engineer B's retirement)
  • 9. Confidential Data Exposed to AI (During report drafting phase; after engagement established and AI tool selected)
  • 10. AI Report Draft Generated (During report drafting phase; immediately following data input action)
  • 11. AI Design Documents Generated (During design document phase; after or concurrent with report drafting)
  • 12. Report Stylistic Inconsistency Detected (After report submission; during Client W's review phase)
  • 13. Design Document Defects Discovered (After design document submission; during Client W's review phase)
Causal Flow
  • Chose AI for Report Drafting → Input Confidential Data into Public AI
  • Input Confidential Data into Public AI → Conducted Thorough Report Review
  • Conducted Thorough Report Review → Submitted Report Without AI Disclosure
  • Submitted Report Without AI Disclosure → Used AI for Design Document Generation
  • Used AI for Design Document Generation → Conducted Cursory Design Document Review
  • Conducted Cursory Design Document Review → Engineer B Retirement Occurs
Opening Context

You are Engineer A, a licensed environmental engineering consultant retained by Client W to prepare two deliverables: a comprehensive environmental report on an organic contaminant of concern, and engineering design documents for groundwater infrastructure modifications at the same site. Your mentor and longtime quality-assurance reviewer, Engineer B, has recently retired. Without that support, and facing deadline pressure, you have turned to a newly released open-source AI tool to assist with both deliverables. You have no prior experience with this tool, and the platform requires you to upload project data to generate drafts. Client W has not been informed of any of this. The report draft and the preliminary design documents are now ready. How you review, seal, disclose, and deliver these work products will determine whether you meet your professional obligations or fall short of them.

From the perspective of Engineer A Environmental Engineering Consultant
Characters (8)
Protagonist (Engineer A)

A licensed professional engineer retained by Client W to prepare a comprehensive environmental report and develop engineering design documents for groundwater infrastructure modifications. Used AI software tools to assist with drafting deliverables but conducted only cursory review before affixing professional seal, resulting in quality deficiencies identified by the client.

Motivations:
  • Likely motivated by efficiency and workload management following the loss of mentorship support, prioritizing timely deliverable submission over rigorous professional review and transparency obligations.
  • Likely motivated by overconfidence in AI-generated outputs and time pressure, leading to an underestimation of the verification rigor required before affixing a professional seal to design documents.
  • Professional obligation to maintain responsible charge and active engagement in the engineering process from conception to completion.
Protagonist (Engineer A)

Developed engineering design documents including plans and specifications for groundwater infrastructure modifications using AI-assisted drafting tools; conducted only cursory review resulting in misaligned dimensions and omission of required safety features

Stakeholder (Engineer B)

A recently retired senior engineer who previously provided essential supervisory oversight and quality assurance that helped maintain Engineer A's professional standards.

Motivations:
  • Motivated by a genuine commitment to professional mentorship during active practice, though retirement inadvertently created a critical accountability gap that Engineer A failed to compensate for through alternative oversight measures.
Stakeholder (Client W)

Retained Engineer A for environmental contaminant reporting and groundwater infrastructure design; reviewed deliverables, identified quality inconsistencies in the report and critical deficiencies in the design documents, and instructed Engineer A to revise plans to meet professional and regulatory standards

Protagonist (Engineer A)

Used AI language processing software to draft an environmental groundwater monitoring report and AI-assisted drafting tools to prepare design documents; performed insufficient review of AI-generated design outputs resulting in misaligned dimensions and omitted safety features; uploaded client confidential information to a public AI interface without client consent; failed to include appropriate citations for AI-generated content.

Protagonist (Engineer A)

Bore statutory responsible charge obligations over the groundwater monitoring report and design documents; failed to maintain active engagement in the design and development process by relying on AI-generated plans without comprehensive verification; did not satisfy responsible charge requirements by conducting only a high-level post-preparation review.

Stakeholder (Client W)

Retained Engineer A for environmental consulting and design services; reviewed AI-assisted design documents and identified misaligned dimensions and omitted safety features; questioned inconsistencies in the report; held confidentiality interests in information uploaded to public AI systems without consent.

Stakeholder (Engineer B)

Senior engineer whose absence from the project left Engineer A without proper oversight and mentorship support, contributing to Engineer A operating in a compromised manner and relying excessively on AI-generated outputs without adequate verification.

Ethical Tensions (16)

Tension between AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W and AI-Generated Work Product Disclosure Constraint Engineer A Report Submission

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: medium; Probability: high; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Tension between AI Tool Disclosure Obligation Breached By Engineer A In Design Document Submission To Client W and Competence Assurance Under Novel Tool Adoption Applied to AI Drafting Tool

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: medium; Probability: medium; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Tension between Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A and Confidential Client Data Input Constraint Engineer A Open-Source AI Upload

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: high; Immediacy: immediate; Proximity: direct; Concentration: concentrated

Tension between Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission / AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents and Responsible Charge Active Review Obligation Breached By Engineer A Over Design Documents

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: high; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Tension between Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents and AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: medium; Probability: high; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Tension between Responsible Charge Active Review Obligation Breached By Engineer A Over Design Documents / Client Data Confidentiality in AI Tool Use Violated by Engineer A / Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement and Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: high; Immediacy: immediate; Proximity: direct; Concentration: diffuse

Tension between AI-Generated Work Product Competence Verification Obligation and Regulatory Compliance Verification Obligation

Obligation Vs Constraint
Affects: Engineer B Mentor Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: medium; Immediacy: near-term; Proximity: indirect; Concentration: diffuse

Tension between Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents and Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: high; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Tension between AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase and Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: medium; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Tension between Mentorship Succession and Peer Review Continuity Obligation Breached By Engineer A Following Engineer B Retirement and Client Data Confidentiality in AI Tool Use Violated by Engineer A

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: medium; Probability: medium; Immediacy: near-term; Proximity: indirect; Concentration: diffuse

Tension between Responsible Charge Active Review Obligation — differentially met for report (thorough) and violated for design documents (cursory) and AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: high; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Tension between Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool and Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: medium; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Tension between Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement and Client Data Confidentiality in AI Tool Use Violated by Engineer A

Obligation Vs Constraint
Affects: Engineer
Moral Intensity (Jones 1991):
Magnitude: medium; Probability: medium; Immediacy: near-term; Proximity: indirect; Concentration: diffuse

Engineer A is obligated to comprehensively verify all AI-assisted design outputs to ensure technical accuracy and safety, yet the retirement of Engineer B (the mentor) has eliminated the peer review mechanism that would normally serve as a critical backstop for that verification. Fulfilling the verification obligation now falls entirely on Engineer A alone, but the structural constraint — the absence of a peer reviewer — makes robust, independent verification practically impossible without additional compensating measures Engineer A has not implemented. This creates a genuine dilemma: the obligation demands a standard of verification that the post-retirement environment structurally prevents from being met, and any shortfall directly threatens public safety in groundwater infrastructure design.

Obligation Vs Constraint
Affects: Engineer A Groundwater Infrastructure Design Engineer; Engineer in Responsible Charge; Client W Environmental Engineering Client; Engineer B Mentor Engineer
Moral Intensity (Jones 1991):
Magnitude: high; Probability: high; Immediacy: near-term; Proximity: direct; Concentration: concentrated

Engineer A bears a positive obligation to represent the true intellectual authorship of submitted work products honestly, including acknowledging AI-generated content. Simultaneously, the non-deception constraint prohibits Engineer A from misrepresenting authorship in any form. These two entities are not merely redundant — they create a dilemma when Engineer A's professional self-interest, efficiency pressures, and the absence of explicit firm or regulatory policy on AI attribution create situational incentives to allow the client to assume full human authorship. The tension is between the active duty to disclose and the passive temptation to omit, where omission itself constitutes deception. The breach already identified in the case confirms that Engineer A resolved this tension in the ethically impermissible direction, underscoring the real pull of competing pressures.

Obligation Vs Constraint
Affects: Engineer A Environmental Engineering Consultant; Client W Environmental Engineering Client; Engineer in Responsible Charge
Moral Intensity (Jones 1991):
Magnitude: high; Probability: high; Immediacy: immediate; Proximity: direct; Concentration: concentrated

Engineer A is obligated under responsible charge to actively and substantively review all design documents bearing their seal, exercising genuine technical judgment over every element. However, the competence boundary constraint recognizes that Engineer A lacks sufficient familiarity with the novel AI drafting tool to critically evaluate whether its outputs are technically sound, algorithmically biased, or subtly erroneous. This creates a genuine dilemma: signing off on documents fulfills the procedural dimension of responsible charge but violates its substantive dimension if Engineer A cannot competently assess what the AI produced. Conversely, refusing to seal documents until competence is established would delay the project and create contractual tensions with Client W. The engineer is caught between the formal duty to be in responsible charge and the epistemic constraint that prevents that charge from being meaningfully exercised.

Obligation Vs Constraint
Affects: Engineer A Groundwater Infrastructure Design Engineer; Engineer in Responsible Charge; Client W Environmental Engineering Client; AI-Assisted Engineering Practitioner
Moral Intensity (Jones 1991):
Magnitude: high; Probability: high; Immediacy: near-term; Proximity: direct; Concentration: concentrated
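The moral-intensity ratings attached to each tension above follow Jones's (1991) dimensions. A minimal sketch of how such ratings might be encoded and compared; the numeric mapping is invented for illustration and is not the scoring actually used by this analysis.

```python
# Hypothetical encoding of Jones (1991) moral-intensity ratings.
# Dimension names follow Jones's framework; the ordinal numeric mapping
# below is invented for this sketch, not the tool's actual scoring.

LEVELS = {"low": 1, "medium": 2, "high": 3,
          "long-term": 1, "near-term": 2, "immediate": 3,
          "indirect": 1, "direct": 3,
          "diffuse": 1, "concentrated": 3}

def intensity_score(ratings: dict) -> int:
    """Sum the ordinal levels across the rated dimensions."""
    return sum(LEVELS[v] for v in ratings.values())

# The confidentiality tension above: every dimension at its maximum.
tension = {
    "magnitude": "high",
    "probability": "high",
    "temporal_immediacy": "immediate",
    "proximity": "direct",
    "concentration": "concentrated",
}
print(intensity_score(tension))  # 15 on this invented scale
```

Even a crude encoding like this makes the board's pattern visible: the confidentiality and authorship tensions rate at or near the maximum on every dimension, while the mentorship-succession tensions score lower on proximity and concentration.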
Opening States (10)
  • AI-Generated Design Documents Non-Compliant State
  • Engineer A Regulatory Compliance Obligation
  • Client Data Exposed to Public Domain State
  • Undisclosed AI Tool Use State
  • Unfamiliar Tool Deployment State
  • Mentor Support Absent State
  • Non-Compliant AI-Generated Design State
  • Engineer A Undisclosed AI Report Use
  • Engineer A Undisclosed AI Design Document Use
  • Engineer A Unfamiliar AI Tool Deployment
Key Takeaways
  • Engineers must proactively disclose AI tool usage to clients, as failure to do so violates transparency obligations even when the final work product meets technical standards.
  • Uploading confidential client data to open-source or third-party AI platforms without explicit client consent constitutes a breach of confidentiality obligations regardless of the engineer's intent or the quality of output produced.
  • Adopting novel tools like AI drafting assistants requires engineers to first verify their own competence in critically evaluating AI-generated outputs before incorporating them into professional deliverables.