Step 4: Full View
Entities, provisions, decisions, and narrative
Full Entity Graph
Entity Types
Synthesis Reasoning Flow
Shows how NSPE provisions inform questions and conclusions - the board's reasoning chain
Node Types & Relationships
→ Question answered by Conclusion
→ Provision applies to Entity
NSPE Code Provisions Referenced
III.3.
Full Text:
Engineers shall avoid all conduct or practice that deceives the public.
III.8.a.
Full Text:
Engineers shall conform with state registration laws in the practice of engineering.
Relevant Case Excerpts:
"Engineer A did not maintain responsible charge in violation of licensure law which violates Code section III.8.a."
Confidence: 95.0%
I.1.
Full Text:
Hold paramount the safety, health, and welfare of the public.
Relevant Case Excerpts:
"The errors in the AI-generated design documents could have led to regulatory noncompliance and safety hazards, conflicting with the Fundamental Canon I.1, “hold paramount the safety, health, and welfare of the public”. Engineer A’s oversight of engineering plans was inadequate, raising ethical concerns."
Confidence: 95.0%
I.2.
Full Text:
Perform services only in areas of their competence.
Relevant Case Excerpts:
"Culminating in the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in…”"
Confidence: 92.0%
I.5.
Full Text:
Avoid deceptive acts.
Relevant Case Excerpts:
"Fundamental Canon I.5 requires an Engineer to “avoid deceptive acts,” which was not violated here."
Confidence: 95.0%
II.1.c.
Full Text:
Engineers shall not reveal facts, data, or information without the prior consent of the client or employer except as authorized or required by law or this Code.
Relevant Case Excerpts:
"[perfo]rmed a thorough review and cross-checked the work on the report, much like Engineer A would have likely done if the report had been initially drafted by an engineer intern or other support staff. Per Code section II.1.c, confidential information can only be shared with prior consent of the Client."
Confidence: 92.0%
II.2.a.
Full Text:
Engineers shall undertake assignments only when qualified by education or experience in the specific technical fields involved.
Relevant Case Excerpts:
"…the key question: Is using AI adding a new tool to an engineer’s toolbox, or is it something more? Fundamental Canon I.2 states that engineers “perform services only in areas of their competence” and Code section II.2.a states that engineers must “undertake assignments only when qualified by education or experience in the specific technical fields involved.” Here, Engineer A, as an experienced environmental engineer…"
Confidence: 92.0%
III.9.
Full Text:
Engineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.
Relevant Case Excerpts:
"Per Code section III.9, engineers are required to “give credit for engineering work to those to whom credit is due,” so Engineer A’s ethical use of the AI software would need to include appropriate citations."
Confidence: 95.0%
"AI, while not a human contributor, fundamentally shaped the report and design documents, warranting disclosure under Code section III.9, “[e]ngineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.” There are currently no universal guidelines mandating AI…"
Confidence: 95.0%
II.2.b.
Full Text:
Engineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.
Relevant Case Excerpts:
"[pe]rformed a thorough review, cross-checked key facts against professional sources, and made adjustments to the text, the final document remained under Engineer A’s direction and control, as required by Code section II.2.b, “[e]ngineers shall not affix their signatures to any plans or documents…”"
Confidence: 95.0%
"[E]ngineer A appears to be operating in a compromised manner – namely, without the help of Engineer B – such that Engineer A relied on the AI-generated plans and specifications without proper oversight. Code section II.2.b states that, “[e]ngineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their di[rection and control].”"
Confidence: 95.0%
Cited Precedent Cases
BER Case 90-6 (analogizing)
Principle Established:
It is ethical for an engineer to sign and seal documents created using a CADD system, whether prepared by the engineer themselves or by others working under their direction and control, provided the engineer has the requisite background, education, and training to be proficient with the technology and its limitations.
Citation Context:
The Board cited this case to establish historical precedent for the ethical use of computer-assisted drafting and design tools, and to show the BER's longstanding openness to new technologies in engineering practice, including early anticipation of AI.
Relevant Excerpts:
"Almost 35 years ago, in BER Case 90-6, the BER looked at a hypothetical involving an engineer's use of computer assisted drafting and design tools."
"In BER Case 90-6, the BER determined that it was ethical for an engineer to sign and seal documents that were created using a CADD system whether prepared by the engineer themselves or by other engineers working under their direction and control."
BER Case 98-3 (distinguishing)
Principle Established:
It is unethical for an engineer to offer services using new technology in areas where they lack competence and experience; technology has an important place in engineering practice but must never be a replacement or substitute for engineering judgment.
Citation Context:
The Board cited this case to establish that technology must never replace or substitute for engineering judgment, and to draw a parallel to Engineer A's insufficient review of AI-generated design documents, while also distinguishing Engineer A's situation by noting Engineer A is not incompetent unlike the engineer in that case.
Relevant Excerpts:
"BER Case 98-3 discussed a solicitation by mail for engineers to use new technology to help gain more work. The solicitation read “Now -- thanks to a revolutionary new CD-ROM -- specifying, designing and costing out any construction project is as easy as pointing and clicking your mouse”"
"it is the BER's view that under the facts, unlike the situation of BER Case 98-3, Engineer A is not incompetent. The facts specifically note Engineer A has "several years of experience" and "strong technical expertise.""
"The BER notes that in BER Case 98-3, the BER stated that technology must not replace or be used as a substitute for engineering judgement."
"BER Case 98-3 emphasized that engineers must acknowledge significant contributions by others."
Questions & Conclusions
Question 1 (Board Question)
Was Engineer A’s use of AI to create the report text ethical, given that Engineer A thoroughly checked the report?
Engineer A's use of AI in report writing was partly ethical, and partly unethical.
Question 2 (Board Question)
Was Engineer A’s use of AI-assisted drafting tools to create the engineering design documents ethical, given that Engineer A reviewed the design at a high level?
The use of AI-assisted drafting tools by Engineer A was not unethical per se.
The Board's conclusion that AI-assisted drafting tools are not unethical per se must be qualified by a competence threshold that Engineer A did not meet with respect to the design documents. Code provisions I.2 and II.2.a require that engineers perform services only within areas of their competence, and this obligation extends to the tools they deploy. When an engineer uses a novel, unfamiliar AI drafting tool - one newly released to market with no prior experience on the engineer's part - and then conducts only a cursory, high-level review of its outputs before sealing and submitting engineering design documents, the engineer has not satisfied the competence standard that makes AI tool use ethically permissible in the first place. The Board's permissive conclusion about AI drafting tools implicitly assumes that the engineer possesses sufficient understanding of the tool's capabilities, limitations, and failure modes to exercise meaningful professional judgment over its outputs. Engineer A lacked that understanding entirely. The resulting design documents contained misaligned dimensions and omitted safety features required by local regulations - defects that a competent, engaged review would have identified. Accordingly, the ethical permissibility of AI-assisted drafting tools is conditional, not categorical: it depends on whether the engineer has sufficient competence with the tool and applies sufficient verification rigor to maintain genuine responsible charge over the work product.
The Board's finding that Engineer A's use of AI was partly unethical with respect to the design documents is further supported by the public safety dimension that the Board did not fully develop. Code provision I.1 places the safety, health, and welfare of the public as the paramount obligation of a licensed engineer, and this obligation is not merely aspirational - it is the foundational constraint against which all other professional judgments must be measured. The AI-generated design documents submitted by Engineer A contained omitted safety features required by local regulations. These omissions were not caught by Engineer A's cursory review and were only identified by Client W. Had Client W not conducted an independent technical review, those deficient documents could have proceeded to construction, creating a direct risk to public safety. The fact that the error was caught before construction does not retroactively satisfy the responsible charge standard; the standard requires that the engineer's own review be sufficient to ensure compliance, not that a client's independent review serve as the final safety check. Engineer A's sealing of documents containing regulatory safety omissions - after only a cursory review - therefore implicates not only Code provisions II.2.b and III.8.a regarding sealing and registration law compliance, but also the paramount public safety obligation of Code provision I.1. The ethical violation in the design phase is accordingly more serious than a mere procedural lapse in review thoroughness: it represents a failure of the core public protection function that professional licensure exists to serve.
Question 3 (Board Question)
If the use of AI was acceptable, did Engineer A have an ethical obligation to disclose the use of AI in any form to the Client?
Similar to other software used in the design or detailing process, Engineer A has no professional or ethical obligation to disclose AI use to Client W (unless such disclosure is required under Engineer A’s contract with Client W).
The Board's conclusion that Engineer A has no universal ethical obligation to disclose AI use to Client W - analogizing AI tools to other engineering software - requires significant qualification in light of the specific facts of this case and must not be read as a blanket rule. The analogy to conventional engineering software breaks down in at least three respects. First, conventional design software such as CAD or finite element analysis tools operates deterministically on engineer-supplied inputs and produces outputs the engineer can fully audit; large language model AI generates probabilistic, non-deterministic text and design content whose provenance and accuracy the engineer cannot fully trace or verify. Second, the observable stylistic discontinuity in the report - which Client W independently detected, noting it read as if written by two different authors - created an implicit misrepresentation about the nature of the work product and its authorship. At the moment Client W raised that observation, Engineer A's silence became an act of omission that a reasonable client would regard as misleading, implicating Code provisions I.5 and III.3. Third, the design document defects - misaligned dimensions and omitted safety features - demonstrate that undisclosed AI-generated outputs in this case did reach a client and could have proceeded to construction without correction absent Client W's independent review. The Board's no-disclosure-obligation conclusion is therefore defensible only in circumstances where the engineer has exercised thorough, competent review of AI outputs and where no client inquiry or observable anomaly has created an affirmative duty to speak. In this case, neither condition was fully satisfied for the design documents, and the stylistic anomaly in the report created a specific moment at which silence was ethically problematic.
Question 4 (Implicit)
By uploading Client W's confidential site data and groundwater monitoring information into an open-source AI platform without obtaining prior consent, did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c, and does this violation stand as a separate ethical breach from any question about AI disclosure or report quality?
Beyond the Board's finding that Engineer A's use of AI in report writing was partly ethical and partly unethical, a critical and independent ethical breach exists that the Board did not explicitly address: Engineer A violated the client confidentiality obligation by uploading Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining Client W's prior consent. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets, creating a foreseeable risk of disclosure beyond Engineer A's control. This breach of Code provision II.1.c stands entirely apart from questions about report quality, AI disclosure, or design document accuracy. A competent engineer deploying any third-party software tool - particularly a newly released, open-source platform with unknown data handling practices - bears an independent obligation to evaluate whether inputting confidential client data is permissible under the client relationship before acting. Engineer A's failure to seek Client W's consent before uploading that data constitutes a separate and self-standing ethical violation that the Board's analysis of report quality and AI transparency does not cure or subsume.
In response to Question 4: Engineer A's upload of Client W's confidential site data and groundwater monitoring information into an open-source AI platform constitutes an independent and discrete ethical violation of Code provision II.1.c, entirely separate from any question about report quality or AI disclosure. The confidentiality obligation is not contingent on whether the resulting work product is accurate, polished, or ultimately beneficial to the client. By inputting proprietary client data into a publicly accessible AI system without obtaining Client W's prior consent, Engineer A exposed that information to potential third-party access, retention, or reuse by the AI platform - consequences Engineer A could not control or fully anticipate, particularly given their admitted unfamiliarity with the software. This breach stands on its own ethical foundation: the harm is the unauthorized exposure itself, not merely any downstream misuse. A competent engineer deploying a novel open-source tool with client data bears an affirmative obligation to investigate the data handling, storage, and privacy policies of that tool before use, and to obtain explicit client consent if any confidential information will be transmitted to a third-party system. Engineer A did neither. This violation is not remediated by the thoroughness of the subsequent report review, by the accuracy of the final work product, or by any disclosure or non-disclosure decision regarding AI authorship.
Question 5 (Implicit)
Given that Engineer B's retirement removed the primary quality assurance mechanism Engineer A had relied upon, did Engineer A have an independent ethical obligation to arrange an alternative peer review process before undertaking a complex, dual-scope engagement involving an unfamiliar AI tool, rather than substituting AI-generated output for that professional oversight?
The Board's analysis does not address a systemic professional vulnerability exposed by this case: Engineer A's over-reliance on AI tools was directly precipitated by the absence of the peer review and mentorship previously provided by Engineer B. When Engineer B retired, Engineer A lost not merely editorial guidance on technical writing but a substantive quality assurance mechanism that had been integral to Engineer A's professional practice. Rather than arranging an alternative peer review process - such as engaging a qualified colleague, a professional review service, or a subconsultant - Engineer A substituted an unfamiliar AI tool for that oversight function. This substitution was ethically inadequate for two independent reasons. First, AI tools are not peer reviewers: they do not apply independent professional judgment, cannot identify regulatory non-compliance from contextual knowledge, and cannot assume professional responsibility for the work. Second, the substitution required uploading confidential client data to an open-source platform, compounding the ethical problem. Code provision II.2.a's competence obligation and the broader duty of diligence implicit in responsible charge together suggest that when an engineer's established quality assurance mechanism becomes unavailable, the engineer bears an affirmative obligation to arrange a functionally equivalent alternative before undertaking complex, high-stakes engagements - not to proceed with an untested technological substitute. The NSPE Code of Ethics does not currently provide explicit guidance on peer review succession planning, and this case illustrates that such guidance would meaningfully serve the profession.
In response to Question 5: Engineer B's retirement did not merely create an inconvenience for Engineer A - it removed the primary quality assurance mechanism upon which Engineer A had structurally depended for professional-grade output, particularly in technical writing. When that mechanism was removed, Engineer A faced a dual-scope engagement of meaningful complexity: a comprehensive contaminant characterization report requiring synthesis of groundwater monitoring data, and engineering design documents for infrastructure modifications. Rather than arranging an alternative peer review process - such as engaging a qualified colleague, contracting a third-party reviewer, or consulting with a professional organization - Engineer A substituted an unfamiliar, newly released open-source AI tool for that professional oversight. This substitution was not ethically neutral. The NSPE Code's competence provisions (I.2 and II.2.a) require engineers to undertake assignments only when qualified, and qualification encompasses not only technical domain knowledge but also the professional infrastructure necessary to deliver work of adequate quality. An engineer who knows they have a recognized weakness in a critical deliverable component, who has lost their primary quality assurance resource, and who then deploys an untested tool as a replacement - without any independent verification of that tool's reliability - has not satisfied the competence standard. Engineer A had an independent ethical obligation to arrange alternative peer review before proceeding, and the failure to do so compounded every subsequent deficiency in both the report and the design documents.
Question 6 (Implicit)
When Client W observed that the report read as if written by two different authors, did Engineer A incur an immediate ethical obligation to proactively disclose the AI's role in drafting the more polished sections, or was silence in that moment itself a deceptive act under Code provisions I.5 and III.3?
In response to Question 6: When Client W directly observed that the report appeared to have been written by two different authors - an observation of stylistic inconsistency that was, in fact, an accurate description of the report's dual-origin nature - Engineer A's silence in that moment was not ethically neutral. Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives the public or clients. Deception does not require an affirmative false statement; it can arise from deliberate silence in circumstances where a reasonable client would expect disclosure and where the omission creates or sustains a materially false impression. Client W's comment was a direct, specific observation that implicitly invited clarification about the report's authorship. A client who points out that a report reads as if written by two people is, in practical terms, asking why. Engineer A's failure to respond honestly - by acknowledging that AI software had generated the more polished sections - allowed Client W to proceed under the false impression that the entire report was the product of Engineer A's own professional authorship. This silence, in context, constitutes a deceptive act under I.5 and conduct that deceives under III.3, independent of whether disclosure was required before submission. The moment of Client W's observation created a discrete, time-specific obligation to clarify, and Engineer A's failure to do so transformed a prior omission into an active, ongoing misrepresentation.
Question 7 (Implicit)
Does Engineer A's failure to include citations to the professional journal articles used to cross-check AI-generated content constitute a violation of the obligation to give credit for engineering work under Code provision III.9, and does it additionally undermine the evidentiary foundation of a technical report that may inform regulatory or remediation decisions?
Engineer A's failure to cite the professional journal articles used to cross-check AI-generated content, and the absence of any attribution for the AI-generated text itself, raises an underexamined concern about the evidentiary integrity of a technical report that may inform regulatory decisions or remediation actions. Code provision III.9 requires engineers to give credit for engineering work to those to whom credit is due. While this provision is most commonly applied to prevent engineers from claiming credit for others' work, it also carries an affirmative dimension: a technical report submitted in a professional capacity implicitly represents that its intellectual content reflects the engineer's own analysis and judgment. Where substantial portions of the report's prose and synthesis were generated by an AI system, and where the factual cross-checking relied on professional journal articles that are not cited, the report's evidentiary foundation is obscured. Regulators, future engineers, or legal proceedings relying on the report cannot assess the quality of the underlying analysis, trace its sources, or evaluate the reliability of the AI-generated synthesis. This is particularly consequential for a report addressing an emerging contaminant of concern, where the scientific basis for conclusions may be contested and where the report may serve as a foundational document for remediation planning or regulatory compliance. The absence of attribution and citation therefore undermines not only intellectual honesty in authorship but also the professional reliability and traceability of the work product itself.
In response to Question 7: Engineer A's failure to cite the professional journal articles used to cross-check AI-generated content raises a concern under Code provision III.9, which requires engineers to give credit for engineering work to those to whom credit is due. While III.9 is most commonly applied to crediting the work of other engineers, its underlying principle - that the intellectual and evidentiary foundations of professional work must be honestly attributed - extends to the sources that substantiate technical conclusions. In a report that may inform regulatory decisions or remediation actions affecting public health and environmental safety, the absence of citations to the scientific literature used to verify AI-generated claims is not merely a stylistic deficiency. It deprives Client W, regulators, and any subsequent reviewers of the ability to independently assess the evidentiary basis for the report's conclusions, to identify the scope and currency of the literature consulted, and to evaluate whether the cross-checking process was adequate. This omission undermines the epistemic integrity of the report as a professional document. Furthermore, in the context of an emerging contaminant of concern - a category of substance where scientific understanding is actively evolving - the failure to ground conclusions in cited, verifiable sources creates a foreseeable risk that outdated, incomplete, or AI-hallucinated information could go undetected by downstream users who rely on the report's apparent professional authority.
Question 8 Principle Tension
Does the principle of Professional Competence Satisfied for Report Writing conflict with the principle of Intellectual Honesty in Authorship when Engineer A's thorough factual verification of AI-generated text is used to justify sealing a report whose prose was substantially composed by a non-human system, potentially misrepresenting the nature and origin of the professional work product to Client W?
In response to Q201: A genuine tension exists between the principle that professional competence in report writing can be satisfied through thorough post-generation verification and the principle of intellectual honesty in authorship. The Board concluded that Engineer A's thorough review of the AI-generated report text was sufficient to render that use of AI ethical. However, this conclusion does not fully resolve the authorship integrity question. When an engineer applies their professional seal to a document, they represent to the client and to the public that the work reflects their professional judgment, expertise, and authorship. The seal is not merely a quality certification - it is an assertion of intellectual ownership and responsible charge. A report whose prose was substantially composed by a non-human language model, and whose authorship was personalized only through minor wording adjustments, does not straightforwardly satisfy that representation, even if every factual claim has been verified. The verification process confirms accuracy; it does not transform AI-generated prose into the engineer's own professional expression. These two principles can be reconciled only if the engineering profession explicitly adopts a framework - which it has not yet done - that defines AI-assisted authorship as a recognized and disclosed mode of professional work product creation. Absent such a framework, the tension remains unresolved, and the Board's conclusion on report ethics should be understood as provisional rather than definitive.
The tension between Professional Competence Satisfied for Report Writing and Intellectual Honesty in Authorship was left substantively unresolved by the Board. The Board accepted that Engineer A's thorough factual verification of AI-generated text satisfied the competence dimension of responsible charge for the report, but it did not squarely confront the authorship dimension: when an engineer personalizes AI-generated prose with only minor wording adjustments and submits it under a professional seal without attribution, the seal implicitly represents that the engineer is the intellectual author of the work product. These two principles pull in opposite directions - competence review can be satisfied by rigorous fact-checking, but intellectual honesty in authorship requires that the origin of the substantive prose be accurately represented. The case teaches that competence and authorship are distinct professional obligations, and that satisfying one does not discharge the other. A fully ethical resolution would have required Engineer A to either disclose the AI's generative role or to rewrite the report in their own voice after verification, rather than treating minor wording edits as sufficient to claim authorship.
Question 9 Principle Tension
Does the principle of Responsible Charge Engagement conflict with the principle of Competence Assurance Under Novel Tool Adoption when an engineer applies their professional seal to AI-generated design documents after only a cursory review, given that the seal legally certifies personal responsible charge over work whose generative process the engineer does not fully understand?
In response to Q202: The tension between Responsible Charge Engagement and Competence Assurance Under Novel Tool Adoption is not merely theoretical - it is demonstrated concretely by the outcome in this case. Engineer A applied their professional seal to AI-generated design documents after only a cursory, high-level review. The professional seal carries a legal and ethical certification that the engineer has exercised responsible charge over the work: that they understand its content, have directed its preparation, and can stand behind its technical adequacy. A cursory review of output generated by a novel AI drafting tool - one with which Engineer A had no prior experience and whose generative logic Engineer A did not fully understand - cannot satisfy that standard. The subsequent discovery of misaligned dimensions and omitted safety features required by local regulations confirms that the cursory review was substantively inadequate. Code provision II.2.b prohibits engineers from affixing their signatures to plans dealing with subject matter in which they lack competence. Competence here encompasses not only domain knowledge in groundwater infrastructure design, but also sufficient understanding of the AI tool's outputs to certify their reliability. Engineer A possessed the former but demonstrably lacked the latter. The seal, in this context, was affixed in violation of II.2.b, and the tension between these two principles is resolved against Engineer A: responsible charge cannot be satisfied by reviewing outputs from a tool whose behavior the reviewing engineer does not adequately understand.
The tension between Responsible Charge Engagement and Competence Assurance Under Novel Tool Adoption was resolved against Engineer A in the design document context, but the resolution reveals a deeper principle hierarchy: when an engineer applies a professional seal, the seal does not merely certify that the engineer reviewed the output - it certifies that the engineer exercised personal, informed judgment over the generative process itself. Because Engineer A had no prior experience with the AI drafting tool and did not understand its full functionality, a cursory high-level review was structurally incapable of satisfying responsible charge, regardless of how much time was spent. The case teaches that the standard of review required to satisfy responsible charge scales inversely with the engineer's familiarity with the generative tool: the less the engineer understands how the tool produces its output, the more rigorous the independent verification must be. Deploying an unfamiliar AI tool is not ethically equivalent to deploying familiar software; it introduces an epistemic gap that only deeper review - not a high-level scan - can close. Public Welfare Paramount ultimately overrides both efficiency and tool novelty as a justification for reduced oversight, particularly where safety-critical omissions in design documents could reach construction.
Question 10 Principle Tension
Does the principle of Client Data Confidentiality in AI Tool Use conflict with the principle of Mentorship Continuity and Succession Planning when an engineer, deprived of a trusted peer reviewer, turns to an open-source AI platform as a substitute quality assurance mechanism, thereby necessarily exposing confidential client data to a third-party system in order to compensate for the loss of professional oversight?
The tension between Client Data Confidentiality in AI Tool Use and Mentorship Continuity and Succession Planning exposes a systemic vulnerability that the Board's conclusions do not address: Engineer A's loss of Engineer B's peer review created professional pressure to substitute AI assistance for human oversight, but the only available AI tool was open-source, meaning that satisfying the need for quality assurance necessarily required exposing Client W's confidential site data and groundwater monitoring information to a public platform without prior consent. This creates a structural conflict in which the engineer cannot simultaneously honor the confidentiality obligation and use the available compensating mechanism. The case teaches that this conflict is not resolvable by choosing one principle over the other after the fact - it is resolvable only by proactive planning before the engagement begins. The principle of Mentorship Continuity and Succession Planning, read alongside the confidentiality obligation under Code provision II.1.c, implies that when a primary quality assurance mechanism is lost, the engineer's first obligation is to identify a compliant replacement - whether a qualified peer reviewer, a privacy-compliant AI platform, or a scope limitation - before accepting work that cannot be competently and confidentially performed alone. Engineer A's failure to engage in that prior planning rendered the confidentiality breach not merely a procedural lapse but a foreseeable consequence of an inadequately structured professional practice.
Question 11 Principle Tension
Does the principle of Public Welfare Paramount conflict with the principle of AI Tool Transparency and Disclosure Applied to Client W Relationship when the Board concludes there is no universal ethical obligation to disclose AI use, yet the case demonstrates that undisclosed AI-generated design documents containing safety-critical omissions were submitted to a client and could have reached construction without correction had Client W not independently identified the defects?
In response to Q204: The Board's conclusion that there is no universal ethical obligation to disclose AI use is placed under significant strain by the facts of this case. The principle that public welfare is paramount - Code provision I.1 - is not merely aspirational; it functions as a constraint on every other professional decision an engineer makes. In this case, AI-generated design documents containing omitted safety features required by local regulations were submitted to Client W under Engineer A's professional seal. Had Client W not independently identified these deficiencies, the documents could have proceeded toward construction in a non-compliant and potentially dangerous state. The Board's general conclusion about disclosure is grounded in an analogy to other software tools used in engineering practice - an analogy that may hold when the tool is well-understood, widely validated, and used within established professional norms. It does not hold with equal force when the tool is newly released, unfamiliar to the practitioner, and demonstrably capable of generating safety-critical omissions that a cursory review failed to catch. In such circumstances, the public welfare principle does not merely permit disclosure - it may affirmatively require it, because disclosure enables the client and downstream reviewers to apply appropriate scrutiny to outputs whose reliability has not been professionally validated. The Board's conclusion on disclosure should therefore be understood as conditional: it applies when AI tools are used competently and their outputs are rigorously verified, not when they are deployed as substitutes for professional judgment with only superficial review.
From a deontological perspective, did Engineer A fulfill their duty of candor toward Client W by submitting AI-generated work products without disclosure, regardless of whether the final outputs were accurate?
In response to Q301: From a deontological perspective, Engineer A did not fulfill their duty of candor toward Client W. Kantian deontological ethics evaluates the moral worth of an action by reference to the maxim underlying it and whether that maxim could be universalized without contradiction. The maxim implicit in Engineer A's conduct - that an engineer may submit AI-generated work products under their professional seal without disclosing the AI's role, provided the outputs are verified for accuracy - cannot be universalized without undermining the foundational trust relationship between licensed professionals and their clients. If all engineers adopted this maxim, the professional seal would cease to function as a reliable signal of personal authorship and responsible charge, and clients would be systematically deprived of information material to their assessment of the work product's provenance and reliability. Furthermore, the duty of candor is not contingent on outcome: it is not satisfied by the fact that the report was accurate or that the design errors were caught. Deontological ethics holds that the duty to be honest with those who rely on one's professional representations exists independently of whether the deception caused harm. Engineer A's silence about AI's role - particularly in the face of Client W's direct observation about the report's stylistic inconsistency - constitutes a breach of the duty of candor that is not remediated by the quality of the final work product.
From a deontological perspective, did Engineer A breach their categorical duty to maintain Responsible Charge by sealing engineering design documents that contained safety omissions and dimensional errors they had only cursorily reviewed?
In response to Q302: From a deontological perspective, Engineer A breached their categorical duty to maintain Responsible Charge by sealing engineering design documents that contained safety omissions and dimensional errors they had only cursorily reviewed. Responsible Charge is not a procedural formality - it is a substantive professional and ethical duty that requires the engineer to have directed the work, to understand its content, and to be able to certify its technical adequacy. The professional seal is the outward expression of that duty, and affixing it to documents that have not been adequately reviewed is a categorical violation regardless of intent or outcome. From a deontological standpoint, the duty is breached at the moment of sealing, not at the moment of harm. The fact that Client W identified the errors before construction does not retroactively satisfy the Responsible Charge obligation; it merely prevented the consequences from being worse. Code provision II.2.b makes this categorical character explicit: engineers shall not affix their signatures to plans dealing with subject matter in which they lack competence. Engineer A's unfamiliarity with the AI drafting tool's outputs, combined with a cursory review that failed to detect regulatory non-compliance, establishes that the competence threshold was not met at the time of sealing. The deontological analysis therefore yields a clear conclusion: the duty was breached, independently of any consequentialist assessment of harm.
From a consequentialist perspective, did the harm produced by Engineer A's cursory review of AI-generated design documents - resulting in misaligned dimensions and omitted safety features - outweigh any efficiency benefits gained from using AI-assisted drafting tools, and does this outcome retroactively render the decision to use those tools unethical?
In response to Q303: From a consequentialist perspective, the harm produced by Engineer A's cursory review of AI-generated design documents - resulting in misaligned dimensions and omitted safety features required by local regulations - does outweigh the efficiency benefits gained from using AI-assisted drafting tools in the design phase, and this outcome is ethically significant even though the errors were caught before construction. Consequentialist analysis evaluates actions by their expected outcomes, including foreseeable risks. A competent engineer deploying a novel, untested AI drafting tool for safety-critical infrastructure design, with no prior experience and only a cursory review process, creates a foreseeable probability of undetected errors reaching construction. The actual outcome - regulatory non-compliance and safety omissions - was not an improbable accident; it was a predictable consequence of an inadequate verification process applied to an unreliable generative tool. The efficiency gain from AI-assisted drafting is real but modest relative to the risk: the time saved in initial document generation was offset by the need for revision, the erosion of client trust, and the potential - had Client W not been diligent - for construction of non-compliant infrastructure. Consequentialist ethics does not require that harm actually occur to render a decision unethical; it requires that the expected value of the action, accounting for foreseeable risks, be negative. Here, the expected value of deploying an unfamiliar AI tool with cursory review for safety-critical design work was negative at the time of the decision, and the actual outcome confirms that assessment.
From a virtue ethics perspective, did Engineer A exhibit the prudence and professional humility expected of a competent engineer by choosing to deploy a novel, unfamiliar AI drafting tool - with no prior experience - as a substitute for the mentorship and peer review previously provided by Engineer B, rather than seeking alternative qualified oversight?
In response to Q305: From a virtue ethics perspective, Engineer A did not exhibit the prudence and professional humility expected of a competent engineer in choosing to deploy a novel, unfamiliar AI drafting tool as a substitute for the mentorship and peer review previously provided by Engineer B. Prudence - the virtue of practical wisdom applied to professional decision-making - requires an engineer to accurately assess their own capabilities and limitations, to recognize the boundaries of their competence, and to seek appropriate resources when those boundaries are approached. Engineer A's self-acknowledged weakness in technical writing, combined with the loss of Engineer B's quality assurance function, created a situation that called for heightened caution and deliberate compensatory measures. Instead, Engineer A responded by introducing a second source of uncertainty: an AI tool that was new to the market, open-source, and entirely unfamiliar to Engineer A. Professional humility would have led Engineer A to recognize that substituting one unknown - AI-generated output - for a known quality assurance resource - Engineer B's expert review - does not reduce professional risk; it compounds it. A prudent engineer in Engineer A's position would have sought an alternative qualified peer reviewer, disclosed the limitation to Client W, or scoped the engagement to match their verified capabilities. The choice to proceed without these safeguards reflects not merely a procedural lapse but a deficit in the practical wisdom that the engineering profession requires of its licensed practitioners.
From a consequentialist perspective, did Engineer A's decision to input Client W's confidential site data into open-source AI software - without obtaining prior consent - create a foreseeable risk of harm to Client W's proprietary interests that outweighs the drafting efficiency gained, and should that risk calculus have been apparent to a competent engineer before acting?
In response to Q306: From a consequentialist perspective, Engineer A's decision to input Client W's confidential site data into open-source AI software without prior consent created a foreseeable risk of harm to Client W's proprietary interests that outweighs the drafting efficiency gained, and that risk calculus should have been apparent to a competent engineer before acting. Open-source AI platforms are, by their nature, systems whose data handling, retention, training data incorporation, and third-party access policies are not under the control of the user. A competent engineer - particularly one engaged in environmental consulting involving site-specific groundwater data that may have regulatory, litigation, or competitive sensitivity - bears a professional obligation to investigate how any third-party system will handle client data before transmitting it. The efficiency benefit of AI-assisted drafting is real but bounded: it accelerates initial document generation. The risk created by uploading confidential client data to an unvetted public platform is potentially unbounded: it includes regulatory exposure, competitive harm, litigation risk, and reputational damage to Client W. A consequentialist analysis that assigns even a modest probability to these harms - and a competent engineer should have assigned a non-trivial probability - yields a negative expected value for the decision to use open-source AI without consent. The fact that Engineer A was unfamiliar with the AI software's full functionality, including its data handling practices, does not mitigate this conclusion; it reinforces it, because proceeding under conditions of ignorance about foreseeable risks is itself a consequentialist failure.
From a virtue ethics perspective, did Engineer A demonstrate the professional integrity and intellectual honesty expected of a licensed engineer by personalizing AI-generated report text with only minor wording adjustments and presenting it under their professional seal without attribution, even if the content was factually verified?
In response to Q304: From a virtue ethics perspective, Engineer A did not demonstrate the professional integrity and intellectual honesty expected of a licensed engineer in the report authorship process. Virtue ethics evaluates conduct by reference to the character traits and dispositions that a person of practical wisdom - a phronimos - would exhibit in the relevant professional role. A licensed engineer of good character, confronted with a recognized weakness in technical writing and the loss of their primary quality assurance resource, would seek transparent solutions: engaging a peer reviewer, disclosing limitations to the client, or explicitly attributing AI assistance in the work product. Engineer A instead chose a path that preserved the appearance of unassisted professional authorship while relying substantially on AI-generated prose. The minor wording adjustments made to personalize the content do not constitute the kind of intellectual engagement that transforms another's expression into one's own. A person of practical wisdom would recognize that submitting AI-generated text under a professional seal - without attribution, and in the face of a client's direct observation about stylistic inconsistency - is not merely a procedural omission but a failure of intellectual honesty. The virtue of integrity requires consistency between one's professional representations and the actual nature of one's work. Engineer A's conduct fell short of that standard, regardless of the report's factual accuracy.
Question 18 Counterfactual
If Engineer A had conducted a rigorous, line-by-line technical review of the AI-generated design documents - equivalent to the thorough review applied to the report - rather than a cursory high-level check, would the safety omissions and dimensional errors have been caught before submission to Client W, and would that level of review have been sufficient to satisfy the Responsible Charge standard?
In response to Q402: If Engineer A had conducted a rigorous, line-by-line technical review of the AI-generated design documents - equivalent in thoroughness to the review applied to the report - the safety omissions and dimensional errors would very likely have been identified before submission to Client W, and such a review would have been substantially more likely to satisfy the Responsible Charge standard. The case establishes a clear asymmetry: Engineer A's thorough review of the report was sufficient to catch factual inaccuracies and verify content quality, while the cursory review of the design documents was not sufficient to detect regulatory non-compliance and dimensional errors. This asymmetry suggests that the review process, not the use of AI per se, was the determinative variable in the design document failure. A rigorous review - one that checked each dimension against site survey data, verified each specification against local regulatory requirements, and confirmed the presence of all required safety features - would have functioned as an adequate Responsible Charge mechanism even for AI-generated outputs, provided the reviewing engineer possessed the domain competence to evaluate what they were reviewing. Engineer A possessed that domain competence in groundwater infrastructure design. The ethical failure was therefore not the use of AI drafting tools, but the decision to apply a cursory rather than rigorous review standard to safety-critical outputs from an untested tool. This counterfactual reinforces the Board's conclusion that AI-assisted drafting is not unethical per se, while clarifying that the adequacy of the review process is the critical ethical variable.
Question 19 Counterfactual
If Engineer A had explicitly cited the use of AI software in the report - including identifying which sections were AI-generated and which were independently authored - would Client W's observation that the report 'read as if written by two different authors' have raised or resolved concerns about the reliability and professional accountability of the work product?
In response to Q404: If Engineer A had explicitly cited the use of AI software in the report - identifying which sections were AI-generated and which were independently authored - Client W's observation that the report read as if written by two different authors would have been resolved rather than raised as a concern. The stylistic inconsistency that Client W detected was, in fact, an accurate artifact of the report's dual-origin nature: AI-generated prose tends to exhibit a characteristic uniformity and polish that differs from the more variable style of human technical writing, particularly from an engineer who self-identifies as less confident in technical writing. Explicit attribution would have provided Client W with a framework for understanding and contextualizing that observation, transforming a source of unease into a transparent feature of the work product. However, explicit attribution would also have raised a different set of questions: it would have invited Client W to scrutinize the AI-generated sections more carefully, to inquire about the AI tool's data handling practices, and potentially to raise concerns about the confidential data exposure that had already occurred. In this sense, disclosure would have been simultaneously clarifying and consequential - it would have resolved the authorship ambiguity while potentially surfacing the deeper confidentiality violation. This counterfactual suggests that the ethical case for disclosure is stronger than the Board's agnostic conclusion implies: transparency about AI use not only serves intellectual honesty but also enables clients to exercise informed oversight of work products that may have been generated under conditions they would not have approved.
Question 20 Counterfactual
If Engineer A had disclosed their intended use of open-source AI software to Client W before beginning work, and Client W had withheld consent to upload confidential site data to a public AI platform, would Engineer A have been obligated to decline the use of AI tools entirely or to seek a privacy-compliant alternative, and how would that have affected the deliverables?
In response to Q401: If Engineer A had disclosed their intended use of open-source AI software to Client W before beginning work, and Client W had withheld consent to upload confidential site data to a public AI platform, Engineer A would have faced a clear ethical fork: either decline the use of open-source AI tools entirely, or identify a privacy-compliant alternative - such as an enterprise AI system with contractual data protection guarantees, or a locally deployed model with no external data transmission. The obligation to decline would not have been absolute; it would have been an obligation to find a compliant solution or to proceed without AI assistance. This counterfactual illuminates a structural point: the ethical failure was not the decision to use AI per se, but the decision to use a specific category of AI tool - open-source, publicly accessible - without first obtaining client consent for the data exposure that use necessarily entailed. Had Engineer A followed the disclosure-and-consent pathway, the subsequent work product might have been produced differently - perhaps with a privacy-compliant AI tool, perhaps without AI assistance at all - but the client relationship and the engineer's ethical standing would have been preserved. The counterfactual also suggests that the Board's conclusion that AI use is not unethical per se should be understood as conditional on the use of appropriate tools under appropriate consent frameworks, not as a blanket endorsement of any AI tool for any purpose.
Question 21 Counterfactual
If Engineer B had not retired and had continued to provide quality assurance review of Engineer A's work products, would Engineer A have been less likely to over-rely on AI tools, and does the absence of mentorship create a systemic professional vulnerability that the NSPE Code of Ethics should address through explicit guidance on peer review succession planning?
Rich Analysis Results
Causal-Normative Links (6)
Conducted Thorough Report Review
- Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report
- AI-Generated Work Product Competence Verification Obligation Partially Met By Engineer A In Report Review
- Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report
- AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report
- Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission
Used AI for Design Document Generation
- AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents
- AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase
- Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents
- Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents
- Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
- Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
- AI Tool Disclosure Obligation Breached By Engineer A In Design Document Submission To Client W
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents
- Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement
Input Confidential Data into Public AI
- Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A
Submitted Report Without AI Disclosure
- AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report
- Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission
- AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report
Conducted Cursory Design Document Review
- Responsible Charge Active Review Obligation Breached By Engineer A Over Design Documents
- AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents
- AI-Generated Work Product Competence Verification Obligation Breached By Engineer A In Design Document Review
- AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase
- Regulatory Compliance Verification Obligation Breached By Engineer A In Design Document Submission
- Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents
- Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
- Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
- Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents
Chose AI for Report Drafting
- Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report
- Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool
- Mentorship Succession and Peer Review Continuity Obligation Breached By Engineer A Following Engineer B Retirement
Question Emergence (21)
Triggering Events
- AI Report Draft Generated
- AI Design Documents Generated
- Report Stylistic Inconsistency Detected
Triggering Actions
- Submitted Report Without AI Disclosure
- Chose AI for Report Drafting
- Used AI for Design Document Generation
Competing Warrants
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report vs. Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report
- AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W vs. AI-Generated Work Product Competence Verification Obligation Partially Met By Engineer A In Report Review
- Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission vs. Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents
Triggering Events
- AI Report Draft Generated
- Report Stylistic Inconsistency Detected
- Client W Engagement Established
Triggering Actions
- Conducted Thorough Report Review
- Submitted Report Without AI Disclosure
- Chose AI for Report Drafting
Competing Warrants
- Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report vs. Intellectual Authorship Integrity Obligation Breached By Engineer A In Report Submission
- Professional Competence Satisfied for Report Writing But Questioned for AI Tool Verification vs. Intellectual Honesty In Authorship Invoked By Engineer A Report
Triggering Events
- AI Design Documents Generated
- Design Document Defects Discovered
- Client W Engagement Established
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
Competing Warrants
- Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents vs. Competence Assurance Under Novel Tool Adoption Applied to AI Drafting Tool
- AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase vs. Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
Triggering Events
- AI Report Draft Generated
- Report Stylistic Inconsistency Detected
- Client W Engagement Established
Triggering Actions
- Chose AI for Report Drafting
- Conducted Thorough Report Review
- Submitted Report Without AI Disclosure
Competing Warrants
- Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission vs. AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report
- Intellectual Integrity in Authorship Applied to AI Report Drafting vs. Professional Competence Invoked By Engineer A In AI Tool Selection
Triggering Events
- Confidential Data Exposed to AI
- Client W Engagement Established
Triggering Actions
- Input Confidential Data into Public AI
Competing Warrants
- Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A vs. AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W
- Client Data Confidentiality in AI Tool Use Violated by Engineer A vs. Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report
- Confidential Client Data Input Constraint Engineer A Open-Source AI Upload vs. AI-Generated Work Product Disclosure Constraint Engineer A Report Submission
Triggering Events
- Engineer B Retirement Occurs
- Client W Engagement Established
- AI Design Documents Generated
- Design Document Defects Discovered
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
Competing Warrants
- Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement vs. Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents
- Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool vs. AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase
- Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance vs. Peer Review Absence Compensation Constraint Engineer A Post-Engineer B Retirement
Triggering Events
- AI Report Draft Generated
- Report Stylistic Inconsistency Detected
- Client W Engagement Established
Triggering Actions
- Submitted Report Without AI Disclosure
- Conducted Thorough Report Review
- Chose AI for Report Drafting
Competing Warrants
- AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W vs. Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report vs. Intellectual Authorship Integrity Obligation Breached By Engineer A In Report Submission
Triggering Events
- AI Report Draft Generated
- Report Stylistic Inconsistency Detected
Triggering Actions
- Conducted Thorough Report Review
- Submitted Report Without AI Disclosure
- Chose AI for Report Drafting
Competing Warrants
- AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report vs. Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report
- Attribution and Citation Integrity in AI-Assisted Work Applied to Environmental Report vs. Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report
Triggering Events
- AI Design Documents Generated
- Design Document Defects Discovered
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
Competing Warrants
- Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents vs. AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase
- Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents vs. Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
Triggering Events
- AI Design Documents Generated
- Design Document Defects Discovered
- Engineer B Retirement Occurs
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
Competing Warrants
- Diligent Verification of AI-Generated Technical Outputs Violated in Design Phase vs. Competence Assurance Under Novel Tool Adoption Applied to AI Drafting Tool
- AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents vs. Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
Triggering Events
- Engineer B Retirement Occurs
- Client W Engagement Established
- AI Design Documents Generated
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
Competing Warrants
- Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool vs. Mentorship Succession and Peer Review Continuity Obligation Breached By Engineer A Following Engineer B Retirement
- AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase vs. Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
Triggering Events
- Client W Engagement Established
- Confidential Data Exposed to AI
Triggering Actions
- Input Confidential Data into Public AI
- Chose AI for Report Drafting
Competing Warrants
- Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A vs. Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report
- AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase vs. Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool
Triggering Events
- Engineer B Retirement Occurs
- AI Design Documents Generated
- Design Document Defects Discovered
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
Competing Warrants
- Mentorship Succession and Peer Review Continuity Obligation Breached By Engineer A Following Engineer B Retirement vs. Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement
- Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents vs. Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool
- AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase vs. Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
Triggering Events
- AI Report Draft Generated
- Report Stylistic Inconsistency Detected
- Client W Engagement Established
Triggering Actions
- Submitted Report Without AI Disclosure
- Conducted Thorough Report Review
- Chose AI for Report Drafting
Competing Warrants
- AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W vs. Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report vs. Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission
- AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report vs. Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report
Triggering Events
- AI Report Draft Generated
- Report Stylistic Inconsistency Detected
Triggering Actions
- Chose AI for Report Drafting
- Conducted Thorough Report Review
- Submitted Report Without AI Disclosure
Competing Warrants
- Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report vs. Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission
- AI-Generated Work Product Competence Verification Obligation Partially Met By Engineer A In Report Review vs. AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report
- Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report vs. Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report
Triggering Events
- AI Design Documents Generated
- Design Document Defects Discovered
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
Competing Warrants
- Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents vs. AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents
- Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool vs. Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
- Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents vs. Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
Triggering Events
- Engineer B Retirement Occurs
- Confidential Data Exposed to AI
- Client W Engagement Established
Triggering Actions
- Input Confidential Data into Public AI
- Chose AI for Report Drafting
Competing Warrants
- Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A vs. Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement
- Client Data Confidentiality in AI Tool Use Violated by Engineer A vs. Mentorship Continuity and Succession Planning Implicated in AI Over-Reliance
Triggering Events
- AI Design Documents Generated
- Design Document Defects Discovered
- Client W Engagement Established
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
- Submitted Report Without AI Disclosure
Competing Warrants
- Public Welfare Paramount Invoked By Omission Of Safety Features In Design Documents vs. AI Tool Transparency and Disclosure Applied to Client W Relationship
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents vs. Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
Triggering Events
- AI Report Draft Generated
- Report Stylistic Inconsistency Detected
- Client W Engagement Established
Triggering Actions
- Chose AI for Report Drafting
- Submitted Report Without AI Disclosure
- Conducted Thorough Report Review
Competing Warrants
- AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W vs. Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report vs. Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission
Triggering Events
- Client W Engagement Established
- Confidential Data Exposed to AI
Triggering Actions
- Input Confidential Data into Public AI
- Chose AI for Report Drafting
Competing Warrants
- Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A vs. AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report vs. Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents
- Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool vs. AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents
Triggering Events
- AI Design Documents Generated
- Design Document Defects Discovered
Triggering Actions
- Used AI for Design Document Generation
- Conducted Cursory Design Document Review
Competing Warrants
- Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents vs. AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents
- Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents vs. Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
- AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase vs. Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
Resolution Patterns (28)
Determinative Principles
- Kantian universalizability: the maxim of submitting AI-generated work without disclosure cannot be universalized without undermining the professional seal as a signal of personal authorship
- Duty of candor is non-contingent on outcome — it exists independently of whether deception caused harm
- Silence in the face of a direct client observation about stylistic inconsistency constitutes active breach of candor
Determinative Facts
- Engineer A submitted AI-generated work products under their professional seal without disclosing the AI's role
- Client W directly observed that the report read as if written by two different authors, and Engineer A remained silent
- The report was ultimately accurate and design errors were caught, yet the Board held this does not remediate the candor breach
Determinative Principles
- Consequentialist analysis evaluates actions by expected outcomes including foreseeable risks, not only actual harm
- A competent engineer deploying a novel, untested AI tool for safety-critical work with only cursory review creates a foreseeable, not merely remote, probability of undetected errors
- Expected value of the decision was negative at the time it was made, and actual outcome confirms that assessment
Determinative Facts
- AI-generated design documents contained misaligned dimensions and omitted safety features required by local regulations
- Engineer A had no prior experience with the AI drafting tool and applied only a cursory review to safety-critical infrastructure design
- The efficiency gain from AI-assisted drafting was offset by the need for revision, erosion of client trust, and the foreseeable risk of construction of non-compliant infrastructure
Determinative Principles
- Affirmative attribution obligation under III.9 extends to transparency about the intellectual origin of professional work product
- Evidentiary integrity and traceability of technical reports that may inform regulatory or remediation decisions
- Intellectual honesty in authorship as a component of professional reliability
Determinative Facts
- Engineer A failed to cite the professional journal articles used to cross-check AI-generated content
- Substantial portions of the report's prose and synthesis were AI-generated without any attribution
- The report addresses an emerging contaminant of concern and may serve as a foundational document for remediation planning or regulatory compliance
Determinative Principles
- Client confidentiality obligation is absolute and not contingent on downstream accuracy or benefit of the work product
- Affirmative pre-use duty to investigate data handling and privacy policies of novel third-party tools before inputting confidential client information
- The harm of unauthorized exposure is the breach itself, independent of whether misuse occurs
Determinative Facts
- Engineer A uploaded Client W's confidential site data and groundwater monitoring information to an open-source AI platform without obtaining prior consent
- Engineer A was admittedly unfamiliar with the software and therefore could not assess or control its data handling, storage, or reuse practices
- No investigation of the platform's privacy policies was conducted and no explicit client consent was sought before transmission
Determinative Principles
- Disclosure and consent as preconditions for ethically permissible use of third-party AI tools with client data
- AI use is not unethical per se but is conditional on appropriate tool selection and consent frameworks
- Engineer's obligation to find compliant alternatives rather than simply declining engagement
Determinative Facts
- Had Engineer A disclosed intended AI use, Client W's hypothetical refusal would have created a clear ethical fork requiring a compliant alternative or no AI use
- Privacy-compliant alternatives existed, such as enterprise AI systems with contractual data protections or locally deployed models
- The ethical failure was use of a specific category of tool — open-source, publicly accessible — without consent, not AI use in general
Determinative Principles
- Responsible Charge requires review thoroughness proportionate to the safety-criticality and novelty of the output
- Review process adequacy, not AI use per se, is the determinative ethical variable for design document quality
- Domain competence of the reviewing engineer is a necessary but insufficient condition for satisfying Responsible Charge
Determinative Facts
- Engineer A's thorough review of the report successfully caught factual inaccuracies, demonstrating that rigorous review was feasible and effective
- Engineer A's cursory review of the design documents failed to detect regulatory non-compliance and dimensional errors that a line-by-line review would likely have caught
- Engineer A possessed domain competence in groundwater infrastructure design sufficient to evaluate the design documents had a rigorous review been applied
Determinative Principles
- Transparency about AI use enables clients to exercise informed oversight of work products
- Intellectual honesty in authorship requires attribution of non-human generative contributions
- Disclosure simultaneously resolves authorship ambiguity and surfaces deeper underlying violations
Determinative Facts
- Client W independently observed that the report read as if written by two different authors, accurately detecting the dual-origin nature of the document
- AI-generated prose exhibits characteristic uniformity and polish that differs detectably from Engineer A's more variable human writing style
- Explicit attribution would have invited Client W to scrutinize AI-generated sections and inquire about data handling, potentially surfacing the confidentiality violation that had already occurred
Determinative Principles
- Intellectual Honesty in Authorship
- Professional Competence Satisfied for Report Writing
- AI Tool Transparency and Disclosure
Determinative Facts
- Engineer A thoroughly verified the factual accuracy of AI-generated report text before sealing
- Engineer A made only minor wording adjustments to AI-generated prose rather than rewriting it in their own voice
- Engineer A submitted the report under a professional seal without any attribution to or disclosure of the AI's generative role
Determinative Principles
- Client Data Confidentiality in AI Tool Use
- Mentorship Continuity and Succession Planning
- Proactive Professional Practice Planning
Determinative Facts
- Engineer A uploaded Client W's confidential site data and groundwater monitoring information to an open-source AI platform without obtaining prior client consent
- Engineer B's retirement removed the primary quality assurance mechanism Engineer A had relied upon, creating professional pressure to substitute AI assistance for human oversight
- The only available AI tool was open-source, meaning that using it as a compensating quality assurance mechanism necessarily exposed confidential data to a public platform
Determinative Principles
- Virtue ethics evaluates conduct by reference to the character traits a person of practical wisdom (phronimos) would exhibit in the relevant professional role
- Integrity requires consistency between professional representations and the actual nature of one's work
- Minor wording adjustments to AI-generated prose do not constitute the intellectual engagement that transforms another's expression into one's own
Determinative Facts
- Engineer A made only minor wording adjustments to AI-generated report text and presented it under their professional seal without attribution
- Client W directly observed that the report read as if written by two different authors, and Engineer A did not disclose the AI's role
- Engineer A recognized a weakness in technical writing and had lost their primary quality assurance resource, yet chose to preserve the appearance of unassisted professional authorship rather than seek transparent alternatives
Determinative Principles
- Ethical permissibility of AI tool use is conditional on the engineer possessing sufficient competence with the specific tool to exercise meaningful professional judgment over its outputs
- Responsible charge requires more than nominal review — it requires verification rigor proportionate to the engineer's familiarity with the tool's capabilities and failure modes
- Competence obligations extend to the tools deployed, not merely to the subject matter of the engineering work
Determinative Facts
- The AI drafting tool was newly released to market and Engineer A had no prior experience with it
- Engineer A conducted only a cursory, high-level review of the AI-generated design documents before sealing and submitting them
- The design documents contained misaligned dimensions and omitted safety features required by local regulations — defects a competent engaged review would have caught
Determinative Principles
- Responsible Charge requires meaningful professional oversight of work products before sealing
- Competence standard applies differentially depending on depth of review actually performed
- Professional seal certifies personal accountability for accuracy and safety of work product
Determinative Facts
- Engineer A thoroughly checked the AI-generated report text, catching and correcting factual errors
- Engineer A reviewed the AI-generated design documents only at a high level, missing misaligned dimensions and omitted safety features
- The design documents contained safety-critical defects that Client W independently identified
Determinative Principles
- AI drafting tools are instrumentally neutral — ethical permissibility depends on how they are used, not their existence
- Professional competence and responsible charge are the operative standards, not the identity of the drafting mechanism
- Engineers may adopt new tools provided they maintain genuine oversight and accountability over outputs
Determinative Facts
- AI-assisted drafting tools are analogous to other software used in engineering practice
- The board found no categorical prohibition on AI use in the NSPE Code provisions
- Engineer A's use of AI for the report, when paired with thorough review, produced an acceptable work product
Determinative Principles
- No universal disclosure obligation exists absent contractual requirement or active deception
- Professional seal and responsible charge, not authorship attribution, are the operative accountability mechanisms in engineering
- Disclosure obligations are triggered by contract terms or affirmative misrepresentation, not by tool use alone
Determinative Facts
- Engineer A's contract with Client W did not contain an explicit AI disclosure requirement
- The board analogized AI tools to other software routinely used in engineering without mandatory disclosure
- Engineer A did not affirmatively misrepresent the nature of the work product when asked about authorship
Determinative Principles
- Client confidentiality is an independent, affirmative obligation that precedes and is separate from questions of work product quality
- A competent engineer must evaluate data-handling risks of third-party platforms before inputting confidential client information
- Foreseeable risk of disclosure to third parties or AI training datasets constitutes a breach regardless of whether actual harm materialized
Determinative Facts
- Engineer A uploaded Client W's proprietary site data and groundwater monitoring information to an open-source AI platform
- Engineer A did not obtain Client W's prior consent before uploading the confidential data
- Open-source AI platforms may retain, process, or incorporate user-submitted data into training datasets, creating foreseeable third-party exposure
Determinative Principles
- Prudence as practical wisdom in professional decision-making
- Professional humility requiring accurate self-assessment of limitations
- Competence assurance under novel tool adoption
Determinative Facts
- Engineer A self-acknowledged weakness in technical writing, making quality assurance especially critical
- Engineer B's retirement removed the established quality assurance mechanism Engineer A had relied upon
- The AI tool was new to the market, open-source, and entirely unfamiliar to Engineer A at the time of deployment
Determinative Principles
- Public Welfare Paramount
- Responsible Charge Engagement
- Competence Assurance Under Novel Tool Adoption
Determinative Facts
- Engineer A had no prior experience with the AI drafting tool and did not fully understand its functionality
- Engineer A conducted only a cursory high-level review of the AI-generated design documents before sealing them
- The AI-generated design documents contained safety-critical omissions and dimensional errors that were identified by Client W, not by Engineer A
Determinative Principles
- Duty of candor and non-deception toward client
- Affirmative obligation to speak when silence becomes misleading
- Conditional nature of disclosure obligation based on review thoroughness and absence of anomaly
Determinative Facts
- Client W independently detected a stylistic discontinuity, noting the report read as if written by two different authors
- Engineer A remained silent when Client W raised the stylistic anomaly, converting omission into implicit misrepresentation
- AI-generated design documents contained misaligned dimensions and omitted safety features that reached the client before correction
Determinative Principles
- Affirmative obligation to arrange functionally equivalent quality assurance when an established oversight mechanism becomes unavailable
- AI tools cannot substitute for independent professional peer review judgment
- Competence and diligence obligations attach to the selection of quality assurance mechanisms, not only to technical execution
Determinative Facts
- Engineer B's retirement eliminated the primary quality assurance and peer review mechanism Engineer A had relied upon for professional practice
- Engineer A substituted an unfamiliar open-source AI tool for that oversight function without arranging any alternative peer review
- The AI substitution required uploading confidential client data to a public platform, compounding the ethical failure
Determinative Principles
- Public safety as the paramount and non-negotiable foundational obligation of professional licensure
- Responsible charge standard requires the engineer's own review to be sufficient — client review cannot serve as the final safety check
- Sealing documents certifies personal professional accountability for regulatory compliance, not merely formal completion of review
Determinative Facts
- AI-generated design documents contained omitted safety features required by local regulations
- Engineer A's cursory review failed to catch these omissions; they were identified only by Client W's independent technical review
- The deficient documents were sealed and submitted, meaning they could have proceeded to construction had Client W not intervened
Determinative Principles
- Deception does not require an affirmative false statement — deliberate silence in circumstances where a reasonable client would expect disclosure and where the omission sustains a materially false impression constitutes a deceptive act
- A client's direct, specific observation about authorial inconsistency creates a discrete, time-specific obligation to clarify
- Silence that transforms a prior omission into an active, ongoing misrepresentation is independently actionable under the Code
Determinative Facts
- Client W directly observed that the report appeared to have been written by two different authors — an observation that was factually accurate given the report's dual-origin nature
- Engineer A remained silent in response to Client W's comment rather than acknowledging that AI software had generated the more polished sections
- Client W's observation implicitly invited clarification about the report's authorship, creating a contextual expectation of honest response
Determinative Principles
- The obligation to give credit for engineering work extends to the scientific and evidentiary sources that substantiate technical conclusions, not only to the work of other engineers
- In a report informing regulatory or remediation decisions affecting public health, the absence of citations deprives downstream users of the ability to independently assess the evidentiary basis for conclusions
- In an emerging contaminant context where scientific understanding is actively evolving, uncited cross-checking creates foreseeable risk that outdated or AI-hallucinated information will go undetected
Determinative Facts
- Engineer A used professional journal articles to cross-check AI-generated content but failed to include citations to those articles in the report
- The report concerned an emerging contaminant of concern — a category where scientific understanding is actively evolving — making citation to current, verifiable sources especially critical
- The report may inform regulatory decisions or remediation actions affecting public health and environmental safety, meaning downstream users depend on its evidentiary transparency
Determinative Principles
- Professional competence in report writing can be satisfied through thorough post-generation verification of factual claims
- Intellectual honesty in authorship requires that a professional seal represent not merely quality certification but also intellectual ownership and responsible charge over the work's expression
- The absence of a profession-wide framework for AI-assisted authorship leaves the tension between these principles unresolved and renders any conclusion provisional
Determinative Facts
- Engineer A conducted a thorough review of the AI-generated report text, verifying factual claims against professional literature
- The report's prose was substantially composed by a non-human language model, with Engineer A's contribution limited to minor wording adjustments
- No established engineering profession framework exists that defines AI-assisted authorship as a recognized and disclosed mode of professional work product creation
Determinative Principles
- The professional seal legally and ethically certifies that the engineer has exercised responsible charge — understood, directed, and can stand behind the work's technical adequacy
- Competence for sealing AI-generated design documents encompasses not only domain knowledge but also sufficient understanding of the AI tool's outputs to certify their reliability
- A cursory review of output from a novel tool whose generative logic the engineer does not fully understand cannot satisfy the responsible charge standard
Determinative Facts
- Engineer A applied their professional seal to AI-generated design documents after only a cursory, high-level review
- Engineer A had no prior experience with the AI drafting tool and did not fully understand its generative logic
- The subsequent discovery of misaligned dimensions and omitted safety features required by local regulations confirmed that the cursory review was substantively inadequate
Determinative Principles
- Public welfare is paramount and functions as a constraint on all other professional decisions
- AI tool disclosure obligation is conditional on competence and rigor of verification
- Analogy to conventional software tools fails when the tool is novel, unvalidated, and produces safety-critical omissions
Determinative Facts
- AI-generated design documents contained omitted safety features required by local regulations and were submitted under Engineer A's professional seal
- Engineer A had no prior experience with the AI drafting tool and performed only a cursory review
- Client W independently identified the deficiencies — without that intervention, non-compliant documents could have proceeded to construction
Determinative Principles
- Responsible Charge is a substantive duty requiring the engineer to have directed the work, understood its content, and be able to certify technical adequacy — not a procedural formality
- The professional seal is the outward expression of Responsible Charge, and affixing it to inadequately reviewed documents is a categorical violation regardless of intent or outcome
- Competence threshold under II.2.b was not met at the time of sealing due to unfamiliarity with the AI tool's outputs
Determinative Facts
- Engineer A performed only a cursory review of AI-generated design documents that contained safety omissions and dimensional errors
- Engineer A had no prior experience with the AI drafting tool and did not fully understand its generative process
- Client W identified the errors before construction, but the Board held this does not retroactively satisfy the Responsible Charge obligation
Determinative Principles
- Consequentialist risk-benefit analysis requiring pre-action assessment of foreseeable harms
- Client data confidentiality as a professional obligation triggered before data transmission
- Competence obligation to investigate third-party system data handling before use
Determinative Facts
- Open-source AI platforms operate under data handling, retention, and third-party access policies outside the user's control
- Client W's site-specific groundwater data carried potential regulatory, litigation, and competitive sensitivity
- Engineer A was unfamiliar with the AI software's full functionality, including its data handling practices, before uploading client data
Determinative Principles
- Competence encompasses not only technical domain knowledge but also the professional infrastructure necessary to deliver adequate-quality work
- Substitution of an untested tool for established professional oversight does not satisfy the competence standard
- An engineer with a recognized weakness in a critical deliverable component bears an independent obligation to arrange alternative peer review
Determinative Facts
- Engineer B's retirement removed the primary quality assurance mechanism Engineer A had structurally depended upon for professional-grade technical writing output
- Engineer A deployed a newly released, unfamiliar open-source AI tool as a replacement for peer review without any independent verification of that tool's reliability
- The engagement was a dual-scope project of meaningful complexity involving a contaminant characterization report and engineering design documents
Decision Points
When Client W directly observed that the environmental report appeared to have been written by two different authors, should Engineer A proactively disclose the AI's generative role, or treat the AI as an internal productivity tool and disclose only if directly asked?
- Proactively Disclose AI Role To Client
- Treat AI As Internal Productivity Tool
- Acknowledge Automated Assistance Without Specifics
Should Engineer A conduct a rigorous, line-by-line technical review of the AI-generated design documents before sealing them, or is a standard QA protocol sufficient — and if neither is adequate alone, should Engineer A bring in an independent peer reviewer?
- Conduct Rigorous Line-By-Line Technical Review
- Apply Standard QA Protocol For AI Outputs
- Engage Independent Peer Reviewer For Verification
Should Engineer A obtain Client W's prior informed consent before uploading confidential site data to the open-source AI platform, or may Engineer A proceed using technical safeguards or platform substitution without seeking consent?
- Investigate Platform and Obtain Client Consent
- Anonymize Data Before Uploading to Platform
- Substitute Privacy-Compliant Enterprise AI Platform
Should Engineer A proactively disclose the AI tool's generative role to Client W — including which sections it drafted — or treat the AI as an internal drafting tool requiring no special disclosure?
- Disclose AI Authorship Fully and Immediately
- Treat AI as Internal Tool, Omit Disclosure
- Add General Methodology Note, Disclose Only If Asked
Should Engineer A investigate the open-source AI platform's data handling practices and obtain Client W's prior written consent before uploading confidential site data, or may Engineer A proceed using anonymization or treat the platform as equivalent to local software?
- Investigate Platform and Obtain Written Consent
- Anonymize Data as Confidentiality Safeguard
- Proceed Treating AI as Local Software Equivalent
After losing Engineer B's peer review function, should Engineer A perform a rigorous independent technical review of all AI-generated documents before sealing them, apply the existing QA protocol treating the AI tool as equivalent to conventional drafting software, or engage a third-party AI-experienced reviewer to fill the oversight gap?
- Perform Rigorous Independent Line-By-Line Review
- Apply Standard QA Protocol As-Is
- Engage Third-Party AI-Experienced Reviewer
What standard of review must Engineer A apply to AI-generated design documents before affixing a professional seal, given unfamiliarity with the AI drafting tool and the safety-critical nature of the outputs?
- Conduct Rigorous Line-by-Line Technical Review
- Apply Standard QA Protocol to AI Outputs
- Engage Third-Party Reviewer for Critical Elements
When Client W observed that the report appeared written by two different authors, should Engineer A disclose that AI software drafted the more polished sections, or respond in a way that affirms professional responsibility without identifying the AI's specific role?
- Disclose AI-Drafted Sections To Client
- Affirm Report Reflects Professional Judgment
- Acknowledge Automated Assistance Without Specifics
Before uploading Client W's confidential site data to an open-source AI platform, should Engineer A investigate the platform's data handling practices and obtain Client W's explicit consent, proceed under the existing engagement agreement, or use only anonymized data in the AI tool?
- Investigate Platform And Obtain Informed Consent
- Proceed Under Existing Engagement Agreement
- Use Anonymized Data In AI Tool Inputs
Should Engineer A conduct a rigorous line-by-line technical review of all AI-generated design documents before sealing them, apply the firm's standard QA protocol as used for conventional drafting tools, or engage a qualified peer reviewer to verify safety-critical elements?
- Conduct Rigorous Line-By-Line Technical Review
- Apply Standard QA Protocol to AI Outputs
- Engage Peer Reviewer for Critical AI Elements
Should Engineer A proactively disclose the AI tool's generative role in response to Client W's authorship observation, or address the concern through explanation or revision without specifically disclosing AI involvement?
- Disclose AI Role and Cite Sources
- Explain Report Reflects Professional Verification
- Revise Prose Without Disclosing AI Involvement
After Engineer B's retirement removed Engineer A's primary quality assurance mechanism, did Engineer A have an independent ethical obligation to arrange a functionally equivalent alternative peer review process before undertaking a complex dual-scope engagement — and did the decision to substitute an open-source AI tool for that oversight independently violate the client data confidentiality obligation by necessarily exposing Client W's proprietary site data to a public platform without prior consent?
- Arrange Alternative Peer Reviewer Before Engaging
- Proceed Relying on Personal Domain Expertise
- Limit Scope to Verified Solo Capabilities
Should Engineer A perform a rigorous, element-by-element technical review of AI-generated design documents before sealing them, apply the firm's standard QA protocol as used for conventionally drafted documents, or engage a third-party reviewer with AI-specific experience to verify safety-critical elements?
- Perform Rigorous Line-By-Line Technical Review
- Apply Standard QA Protocol As-Is
- Engage Third-Party AI-Experienced Reviewer
After Engineer B's retirement eliminated Engineer A's primary QA resource, should Engineer A arrange a functionally equivalent peer reviewer before proceeding with the Client W engagement, proceed relying on personal domain competence, or disclose the QA gap to Client W and propose a reduced scope?
- Arrange Alternative Qualified Peer Reviewer
- Proceed Relying On Own Domain Competence
- Disclose QA Change And Propose Reduced Scope
Should Engineer A investigate the open-source AI platform's data handling practices and obtain Client W's explicit consent before uploading confidential site data, or may Engineer A proceed by anonymizing inputs or treating the platform as equivalent to standard third-party engineering software?
- Investigate Platform and Obtain Client Consent
- Use Anonymized Data for AI Assistance
- Treat AI Platform as Standard Third-Party Software
Given that Engineer B's retirement removed Engineer A's primary quality assurance mechanism and that Engineer A had no prior experience with the AI drafting tool, should Engineer A perform a rigorous line-by-line technical review before sealing, apply the standard QA protocol as-is, or engage an independent peer reviewer to verify safety-critical elements?
- Perform Rigorous Line-By-Line Technical Review
- Apply Standard QA Protocol As-Is
- Engage Independent Peer Reviewer Before Sealing
When Client W directly observed that the report appeared to have been written by two different authors — accurately identifying its dual-origin nature — should Engineer A disclose the AI tool's generative role, deflect with a technical explanation, or offer revision without attribution?
- Disclose AI Role Upon Client Observation
- Explain Stylistic Variation as Technical Density
- Offer Prose Revision Without Disclosing AI
Should Engineer A obtain Client W's explicit prior consent before uploading confidential site data to the open-source AI platform, or may Engineer A proceed by anonymizing the data or limiting inputs to publicly available information?
- Obtain Explicit Prior Client Consent
- Anonymize Data Before Platform Upload
- Input Only Publicly Available Data to Platform
Case Narrative
Phase 4 narrative construction results for Case 7
Opening Context
You are Engineer A, a licensed environmental engineering consultant retained by Client W to prepare two deliverables: a comprehensive environmental report on an organic contaminant of concern, and engineering design documents for groundwater infrastructure modifications at the same site. Your mentor and longtime quality-assurance reviewer, Engineer B, has recently retired. Without that support, and facing deadline pressure, you have turned to a newly released open-source AI tool to assist with both deliverables. You have no prior experience with this tool, and the platform requires you to upload project data to generate drafts. Client W has not been informed of any of this. The report draft and the preliminary design documents are now ready. How you review, seal, disclose, and deliver these work products will determine whether you meet your professional obligations or fall short of them.
Characters (8)
A licensed professional engineer retained by Client W to prepare a comprehensive environmental report and develop engineering design documents for groundwater infrastructure modifications. Used AI software tools to assist with drafting deliverables but conducted only cursory review before affixing professional seal, resulting in quality deficiencies identified by the client.
- Likely motivated by efficiency and workload management following the loss of mentorship support, prioritizing timely deliverable submission over rigorous professional review and transparency obligations.
- Likely motivated by overconfidence in AI-generated outputs and time pressure, leading to an underestimation of the verification rigor required before affixing a professional seal to design documents.
- Professional obligation to maintain responsible charge and active engagement in the engineering process from conception to completion.
Developed engineering design documents including plans and specifications for groundwater infrastructure modifications using AI-assisted drafting tools; conducted only cursory review resulting in misaligned dimensions and omission of required safety features
A recently retired senior engineer who previously provided essential supervisory oversight and quality assurance that helped maintain Engineer A's professional standards.
- Motivated by a genuine commitment to professional mentorship during active practice, though retirement inadvertently created a critical accountability gap that Engineer A failed to compensate for through alternative oversight measures.
Retained Engineer A for environmental contaminant reporting and groundwater infrastructure design; reviewed deliverables, identified quality inconsistencies in the report and critical deficiencies in the design documents, and instructed Engineer A to revise plans to meet professional and regulatory standards
Used AI language processing software to draft an environmental groundwater monitoring report and AI-assisted drafting tools to prepare design documents; performed insufficient review of AI-generated design outputs resulting in misaligned dimensions and omitted safety features; uploaded client confidential information to a public AI interface without client consent; failed to include appropriate citations for AI-generated content.
Bore statutory responsible charge obligations over the groundwater monitoring report and design documents; failed to maintain active engagement in the design and development process by relying on AI-generated plans without comprehensive verification; did not satisfy responsible charge requirements by conducting only a high-level post-preparation review.
Retained Engineer A for environmental consulting and design services; reviewed AI-assisted design documents and identified misaligned dimensions and omitted safety features; questioned inconsistencies in the report; held confidentiality interests in information uploaded to public AI systems without consent.
Senior engineer whose absence from the project left Engineer A without proper oversight and mentorship support, contributing to Engineer A operating in a compromised manner and relying excessively on AI-generated outputs without adequate verification.
States (10)
Event Timeline (35)
| # | Event | Type |
|---|---|---|
| 1 | The case centers on an engineering firm where AI-generated design documents and reports were produced under conditions that did not meet state engineering standards and regulations. This foundational context sets the stage for a series of professional and ethical decisions that would ultimately raise serious questions about competence, transparency, and public safety. | state |
| 2 | The engineer made a deliberate decision to use an AI tool to assist in drafting a professional engineering report, rather than relying solely on traditional methods. This choice introduced new risks around accountability and professional responsibility, as the engineer retained full legal and ethical obligation for the accuracy of the final work product. | action |
| 3 | In the process of using the AI tool, the engineer entered sensitive and proprietary client data into a publicly accessible AI platform not approved for confidential information. This action potentially exposed protected client information to unauthorized parties, constituting a serious breach of professional confidentiality obligations. | action |
| 4 | Before submission, the engineer conducted a careful and comprehensive review of the AI-generated report to verify its technical accuracy and completeness. This diligent review represented a critical step in exercising professional judgment and fulfilling the engineer's duty to ensure the integrity of work bearing their seal. | action |
| 5 | The engineer submitted the completed report to the client without disclosing that AI tools had been used in its preparation. This omission raised significant ethical concerns regarding transparency and honesty, as clients and regulatory bodies may have a legitimate interest in knowing how engineering work products are generated. | action |
| 6 | The engineer extended their use of AI beyond report writing by also employing it to generate formal engineering design documents. This escalation increased the ethical and legal stakes considerably, as design documents carry direct implications for public health, safety, and welfare. | action |
| 7 | Unlike the thorough review applied to the report, the engineer performed only a superficial review of the AI-generated design documents before approving them. This cursory oversight failed to meet the standard of care expected of a licensed professional engineer and left potentially critical errors undetected. | action |
| 8 | Engineer B, a senior colleague who may have provided oversight or mentorship within the firm, retired during this period. This departure is significant because it may have removed an experienced check on the engineer's work, potentially contributing to the lapse in professional standards that followed. | automatic |
| 9 | Client W Engagement Established | automatic |
| 10 | Confidential Data Exposed to AI | automatic |
| 11 | AI Report Draft Generated | automatic |
| 12 | AI Design Documents Generated | automatic |
| 13 | Report Stylistic Inconsistency Detected | automatic |
| 14 | Design Document Defects Discovered | automatic |
| 15 | Tension between "AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W" and "AI-Generated Work Product Disclosure Constraint — Engineer A Report Submission" | automatic |
| 16 | Tension between "AI Tool Disclosure Obligation Breached By Engineer A In Design Document Submission To Client W" and "Competence Assurance Under Novel Tool Adoption" as applied to the AI drafting tool | automatic |
| 17 | Did Engineer A have an ethical obligation to disclose the AI's generative role in drafting the environmental report to Client W — both at submission and upon Client W's direct observation of stylistic inconsistency — and does submitting AI-generated prose with only minor wording edits under a professional seal without attribution constitute a breach of intellectual authorship integrity and candor? | decision |
| 18 | Did Engineer A satisfy the Responsible Charge and competence standards by conducting only a cursory, high-level review of AI-generated design documents produced by a novel, unfamiliar drafting tool before affixing a professional seal, given that the review failed to detect misaligned dimensions and omitted safety features required by local regulations? | decision |
| 19 | Did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c by uploading Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining prior consent, and does this breach stand as a self-contained ethical violation regardless of the accuracy or quality of the resulting work products? | decision |
| 20 | Did Engineer A have an ethical obligation to proactively disclose the use of AI tools to Client W when submitting AI-generated work products, and did silence in the face of Client W's direct observation about stylistic inconsistency constitute a deceptive act? | decision |
| 21 | Did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c by uploading Client W's confidential site data and groundwater monitoring information to an open-source AI platform without obtaining prior consent, and does this constitute a discrete ethical breach separate from any question about AI disclosure or work product quality? | decision |
| 22 | Did Engineer A satisfy the Responsible Charge standard and competence obligation under Code provisions II.2.a and II.2.b by applying only a cursory, high-level review to AI-generated engineering design documents before affixing their professional seal, given that the documents contained safety omissions and dimensional errors that the review failed to detect? | decision |
| 23 | Should Engineer A fulfill the Intellectual Authorship Integrity Obligation and the AI-Assisted Design Comprehensive Verification Obligation by conducting thorough, proportionate review of AI-generated work products before sealing and submitting them to Client W, given that the report received a thorough review while the design documents received only a cursory high-level check? | decision |
| 24 | Should Engineer A fulfill the Proactive AI Disclosure to Client Obligation by disclosing the use of AI tools to Client W — particularly at the moment Client W directly observed that the report appeared to have been written by two different authors — or does silence in that moment constitute a deceptive act under Code provisions I.5 and III.3? | decision |
| 25 | Should Engineer A fulfill the Client Data Confidentiality Obligation and the Peer Review Succession Obligation by obtaining Client W's prior consent before uploading confidential site data to an open-source AI platform, and by arranging an alternative qualified peer review mechanism to replace Engineer B's oversight before undertaking a complex dual-scope engagement — rather than substituting an unfamiliar open-source AI tool for both functions? | decision |
| 26 | Should Engineer B (as the mentor/quality assurance figure whose retirement precipitated Engineer A's AI over-reliance) have fulfilled the Responsible Charge Active Review Obligation and AI-Generated Work Product Competence Verification Obligation by ensuring continuity of oversight before retiring, and does Engineer A's subsequent cursory review of AI-generated design documents constitute a categorical breach of responsible charge? | decision |
| 27 | Should Engineer B (as the departing mentor) have fulfilled the Mentorship Succession and Peer Review Continuity Obligation by arranging or facilitating alternative peer review mechanisms for Engineer A before retiring, and does Engineer A bear an independent obligation to arrange such alternatives rather than substituting an unfamiliar AI tool for professional oversight? | decision |
| 28 | Should Engineer B (as the quality assurance anchor for Engineer A's practice) have fulfilled the AI-Generated Work Product Competence Verification Obligation and Regulatory Compliance Verification Obligation by ensuring Engineer A possessed sufficient competence with the AI tool and applied adequate verification rigor before sealing outputs, and does Engineer A's failure to do so — combined with silence when Client W identified stylistic inconsistency — constitute independent ethical violations of candor and competence? | decision |
| 29 | Should Engineer A conduct a rigorous, line-by-line technical review of AI-generated design documents sufficient to detect safety omissions and dimensional errors before affixing a professional seal, rather than relying on a cursory high-level check? | decision |
| 30 | Should Engineer A verify sufficient competence with a novel AI drafting tool and disclose its use to Client W — particularly when client-observable anomalies arise and when confidential client data is necessarily transmitted to a public platform — as preconditions for ethically permissible AI-assisted work product submission? | decision |
| 31 | Should Engineer A arrange a functionally equivalent alternative peer review mechanism — and select a confidentiality-compliant AI tool — before undertaking a complex dual-scope engagement after losing the primary quality assurance resource provided by Engineer B, rather than substituting an unfamiliar open-source AI tool for that professional oversight? | decision |
| 32 | Should Engineer A apply a rigorous, line-by-line technical review to AI-generated work products before affixing a professional seal, or is a high-level cursory review sufficient to satisfy the Responsible Charge standard when AI-assisted drafting tools are used? | decision |
| 33 | Should Engineer A have assessed their own competence with a novel AI drafting tool — including its capabilities, limitations, and failure modes — before deploying it for safety-critical engineering design documents, or was domain expertise in the subject matter sufficient to satisfy the competence standard for AI-assisted work? | decision |
| 34 | Should Engineer A have obtained Client W's prior informed consent before uploading confidential site data and groundwater monitoring information to an open-source AI platform, and independently arranged alternative peer review after Engineer B's retirement, rather than proceeding without either safeguard? | decision |
| 35 | Engineer A's use of AI in report writing was partly ethical and partly unethical. | outcome |
Decision Moments (18)
- Disclose AI tool's generative role in the report to Client W at submission and clarify AI authorship when Client W raises the stylistic inconsistency observation
- Actual outcome: Submit the AI-generated report under professional seal without disclosing AI involvement and remain silent when Client W observes the stylistic inconsistency
- Conduct a rigorous, line-by-line technical review of all AI-generated design documents — verifying each dimension, safety feature, and regulatory compliance requirement — before affixing a professional seal, and arrange alternative qualified peer review to compensate for Engineer B's absence
- Actual outcome: Seal and submit AI-generated design documents after only a cursory high-level review, relying on the AI tool's output without independent verification of dimensions, safety features, or regulatory compliance
- Obtain Client W's prior informed consent before uploading confidential site data to the open-source AI platform, and investigate the platform's data handling and privacy policies before any client data transmission
- Actual outcome: Upload Client W's confidential site data and groundwater monitoring information into the open-source AI platform without obtaining prior consent or investigating the platform's data handling practices
- Proactively disclose AI tool usage and AI-generated sections to Client W before or upon submission, and clarify AI's role when Client W raises the stylistic inconsistency observation
- Actual outcome: Submit AI-generated work products without disclosure and remain silent when Client W observes the stylistic inconsistency, treating AI as an internal drafting tool equivalent to other engineering software
- Investigate the open-source AI platform's data handling and privacy policies before use, obtain Client W's explicit prior consent for uploading confidential site data, and identify a privacy-compliant alternative if consent is withheld Actual outcome
- Upload Client W's confidential site data and groundwater monitoring information to the open-source AI platform without prior investigation of data handling practices and without obtaining Client W's consent
- Conduct a rigorous, line-by-line technical review of all AI-generated design documents — verifying each dimension against site survey data, each specification against local regulatory requirements, and confirming the presence of all required safety features — before affixing the professional seal Actual outcome
- Apply a cursory, high-level review to AI-generated design documents and affix the professional seal without verifying dimensional accuracy, regulatory compliance, or the presence of required safety features
- Conduct rigorous, line-by-line technical verification of all AI-generated work products — proportionate to tool novelty and safety-criticality — before affixing professional seal, and attribute AI generative contributions in the work product Actual outcome
- Apply a high-level cursory review to AI-generated design documents and seal them without attribution, treating AI output as equivalent to conventional engineering software output
- Proactively disclose AI tool usage and identify AI-generated sections to Client W — particularly upon Client W's direct observation of stylistic inconsistency — and attribute AI generative contributions in both the report and design documents Actual outcome
- Remain silent about AI tool usage when Client W raises the stylistic inconsistency observation, treating AI as an undisclosed internal drafting mechanism equivalent to conventional engineering software
- Obtain Client W's prior informed consent before uploading confidential site data to any third-party AI platform, investigate the platform's data handling and privacy policies before use, and arrange an alternative qualified peer reviewer or privacy-compliant AI tool to replace Engineer B's oversight before accepting the dual-scope engagement Actual outcome
- Upload confidential client data to the open-source AI platform without prior consent and proceed with the engagement using the AI tool as a substitute for Engineer B's peer review oversight, treating the efficiency benefit as sufficient justification
- Conduct rigorous line-by-line technical review of all AI-generated design documents, verifying each dimension against site survey data and each specification against local regulatory requirements, before affixing professional seal Actual outcome
- Apply cursory high-level review of AI-generated design documents and affix professional seal without verifying regulatory compliance or dimensional accuracy against site-specific requirements
- Arrange alternative qualified peer review mechanism (qualified colleague, professional review service, or subconsultant) before accepting the dual-scope engagement following Engineer B's retirement Actual outcome
- Proceed with the engagement by substituting a newly released open-source AI tool for Engineer B's expert review without arranging any alternative human oversight mechanism
- Disclose AI tool's generative role to Client W when Client W raises the stylistic inconsistency observation, cite journal articles used to cross-check AI content, and verify all AI-generated design outputs against local regulatory requirements before sealing Actual outcome
- Remain silent about AI's generative role when Client W raises the stylistic inconsistency, omit citations to verification sources, and seal design documents after cursory review without verifying regulatory compliance
- Conduct rigorous line-by-line technical review of AI-generated design documents verifying each dimension, specification, and safety feature against site data and local regulatory requirements before sealing Actual outcome
- Perform cursory high-level review of AI-generated design documents and affix professional seal without verifying individual dimensions, specifications, or regulatory safety feature compliance
- Verify competence with the AI tool before deployment, disclose AI use to Client W when client-observable anomalies arise or safety-critical outputs are involved, and cite sources used to cross-check AI-generated content Actual outcome
- Deploy novel AI tool without prior competence verification, remain silent about AI authorship when client raises stylistic concerns, and submit work products without attribution or citation of cross-checking sources
- Arrange alternative qualified peer review before accepting the engagement, select a privacy-compliant AI tool with contractual data protection guarantees or obtain Client W's explicit consent before uploading confidential data, and scope the engagement to match verified professional infrastructure Actual outcome
- Proceed with the engagement by substituting an unfamiliar open-source AI tool for Engineer B's peer review function and upload confidential client data to the public platform without obtaining prior consent or investigating its data handling practices
- Apply rigorous, line-by-line technical review of all AI-generated work products before sealing, verifying each dimension, specification, and safety feature against regulatory requirements and site data Actual outcome
- Conduct a high-level cursory review of AI-generated design documents before sealing, relying on the AI tool's output quality without independently verifying each technical element
- Assess competence with the novel AI tool before deployment, investigate its capabilities and failure modes, and arrange alternative qualified peer review to compensate for the loss of Engineer B's oversight before undertaking the engagement Actual outcome
- Deploy the novel AI drafting tool relying on existing domain expertise in groundwater infrastructure design as sufficient competence, without separately investigating the tool's limitations or arranging alternative peer review
- Obtain Client W's prior informed consent before uploading any confidential site data to the AI platform, investigate the platform's data handling and privacy policies, and arrange alternative qualified peer review to replace Engineer B's oversight function before accepting the engagement Actual outcome
- Upload confidential client data to the open-source AI platform without prior consent and proceed without arranging alternative peer review, relying on the AI tool as a substitute for Engineer B's quality assurance function
Sequential action-event relationships
- Chose AI for Report Drafting → Input Confidential Data into Public AI
- Input Confidential Data into Public AI → Conducted Thorough Report Review
- Conducted Thorough Report Review → Submitted Report Without AI Disclosure
- Submitted Report Without AI Disclosure → Used AI for Design Document Generation
- Used AI for Design Document Generation → Conducted Cursory Design Document Review
- Conducted Cursory Design Document Review → Engineer B Retirement Occurs
- conflict_1 → decision_1
- conflict_1 → decision_2
- conflict_1 → decision_3
- conflict_1 → decision_4
- conflict_1 → decision_5
- conflict_1 → decision_6
- conflict_1 → decision_7
- conflict_1 → decision_8
- conflict_1 → decision_9
- conflict_1 → decision_10
- conflict_1 → decision_11
- conflict_1 → decision_12
- conflict_1 → decision_13
- conflict_1 → decision_14
- conflict_1 → decision_15
- conflict_1 → decision_16
- conflict_1 → decision_17
- conflict_1 → decision_18
- conflict_2 → decision_1
- conflict_2 → decision_2
- conflict_2 → decision_3
- conflict_2 → decision_4
- conflict_2 → decision_5
- conflict_2 → decision_6
- conflict_2 → decision_7
- conflict_2 → decision_8
- conflict_2 → decision_9
- conflict_2 → decision_10
- conflict_2 → decision_11
- conflict_2 → decision_12
- conflict_2 → decision_13
- conflict_2 → decision_14
- conflict_2 → decision_15
- conflict_2 → decision_16
- conflict_2 → decision_17
- conflict_2 → decision_18
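The edge lists above describe a small directed graph: a linear chain of action-events, plus two conflict nodes that each link to all 18 decision moments. As a minimal sketch of how that underlying data might be modeled and queried, assuming a plain in-memory representation (the node ids come from the export; the list/dict structure itself is an assumption, not the tool's actual data model):

```python
# Sequential action-event chain: each event leads to the next one in the list.
event_chain = [
    "Chose AI for Report Drafting",
    "Input Confidential Data into Public AI",
    "Conducted Thorough Report Review",
    "Submitted Report Without AI Disclosure",
    "Used AI for Design Document Generation",
    "Conducted Cursory Design Document Review",
    "Engineer B Retirement Occurs",
]

# Conflict -> decision edges: both conflicts link to all 18 decision moments.
conflict_edges = {
    f"conflict_{c}": [f"decision_{d}" for d in range(1, 19)]
    for c in (1, 2)
}

def successor(event):
    """Return the event that follows `event` in the chain, or None at the end."""
    i = event_chain.index(event)
    return event_chain[i + 1] if i + 1 < len(event_chain) else None
```

With this shape, traversing the reasoning flow reduces to list and dict lookups, e.g. `successor("Chose AI for Report Drafting")` yields the confidential-data upload event, and `conflict_edges["conflict_1"]` enumerates every decision moment tied to that conflict.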
Key Takeaways
- Engineers must proactively disclose AI tool usage to clients, as failure to do so violates transparency obligations even when the final work product meets technical standards.
- Uploading confidential client data to open-source or third-party AI platforms without explicit client consent constitutes a breach of confidentiality obligations regardless of the engineer's intent or the quality of output produced.
- Adopting novel tools like AI drafting assistants requires engineers to first verify their own competence in critically evaluating AI-generated outputs before incorporating them into professional deliverables.