Step 4: Full View
Entities, provisions, decisions, and narrative
Full Entity Graph
Synthesis Reasoning Flow
The board's deliberative chain: which code provisions informed which ethical questions, and how those questions were resolved. Toggle "Show Entities" to see which entities each provision applies to.
NSPE Code Provisions Referenced
Section I. Fundamental Canons (3 provisions, 97 entities)
Hold paramount the safety, health, and welfare of the public.
Perform services only in areas of their competence.
Avoid deceptive acts.
Section II. Rules of Practice (3 provisions, 83 entities)
Engineers shall not reveal facts, data, or information without the prior consent of the client or employer except as authorized or required by law or this Code.
Engineers shall undertake assignments only when qualified by education or experience in the specific technical fields involved.
Engineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.
Section III. Professional Obligations (3 provisions, 41 entities)
Engineers shall avoid all conduct or practice that deceives the public.
Engineers shall conform with state registration laws in the practice of engineering.
Engineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.
Cross-Case Connections
Explicit Board-Cited Precedents (2)
Cases explicitly cited by the Board in this opinion. These represent direct expert judgment about intertextual relevance.
Principle Established:
It is unethical for an engineer to offer services using new technology in areas where they lack competence and experience; technology has an important place in engineering practice but must never be a replacement or substitute for engineering judgment.
Citation Context:
The Board cited this case to establish that technology must never replace or substitute for engineering judgment, and to draw a parallel to Engineer A's insufficient review of AI-generated design documents, while also distinguishing Engineer A's situation by noting that, unlike the engineer in that case, Engineer A is not incompetent.
Principle Established:
It is ethical for an engineer to sign and seal documents created using a CADD system, whether prepared by the engineer themselves or by others working under their direction and control, provided the engineer has the requisite background, education, and training to be proficient with the technology and its limitations.
Citation Context:
The Board cited this case to establish historical precedent for the ethical use of computer-assisted drafting and design tools, and to show the BER's longstanding openness to new technologies in engineering practice, including early anticipation of AI.
Implicit Similar Cases (10)
Cases sharing ontology classes or structural similarity. These connections arise from constrained extraction against a shared vocabulary.
Questions & Conclusions
Was Engineer A’s use of AI to create the report text ethical, given that Engineer A thoroughly checked the report?
Engineer A's use of AI in report writing was partly ethical and partly unethical.
Was Engineer A’s use of AI-assisted drafting tools to create the engineering design documents ethical, given that Engineer A reviewed the design at a high level?
The use of AI-assisted drafting tools by Engineer A was not unethical per se.
The Board's conclusion that AI-assisted drafting tools are not unethical per se must be qualified by a competence threshold that Engineer A did not meet with respect to the design documents. Code provisions I.2 and II.2.a require that engineers perform services only within areas of their competence, and this obligation extends to the tools they deploy. When an engineer uses a novel, unfamiliar AI drafting tool - one newly released to market with no prior experience on the engineer's part - and then conducts only a cursory, high-level review of its outputs before sealing and submitting engineering design documents, the engineer has not satisfied the competence standard that makes AI tool use ethically permissible in the first place. The Board's permissive conclusion about AI drafting tools implicitly assumes that the engineer possesses sufficient understanding of the tool's capabilities, limitations, and failure modes to exercise meaningful professional judgment over its outputs. Engineer A lacked that understanding entirely. The resulting design documents contained misaligned dimensions and omitted safety features required by local regulations - defects that a competent, engaged review would have identified. Accordingly, the ethical permissibility of AI-assisted drafting tools is conditional, not categorical: it depends on whether the engineer has sufficient competence with the tool and applies sufficient verification rigor to maintain genuine responsible charge over the work product.
The Board's finding that Engineer A's use of AI was partly unethical with respect to the design documents is further supported by the public safety dimension that the Board did not fully develop. Code provision I.1 places the safety, health, and welfare of the public as the paramount obligation of a licensed engineer, and this obligation is not merely aspirational - it is the foundational constraint against which all other professional judgments must be measured. The AI-generated design documents submitted by Engineer A contained omitted safety features required by local regulations. These omissions were not caught by Engineer A's cursory review and were only identified by Client W. Had Client W not conducted an independent technical review, those deficient documents could have proceeded to construction, creating a direct risk to public safety. The fact that the error was caught before construction does not retroactively satisfy the responsible charge standard; the standard requires that the engineer's own review be sufficient to ensure compliance, not that a client's independent review serve as the final safety check. Engineer A's sealing of documents containing regulatory safety omissions - after only a cursory review - therefore implicates not only Code provisions II.2.b and III.8.a regarding sealing and registration law compliance, but also the paramount public safety obligation of Code provision I.1. The ethical violation in the design phase is accordingly more serious than a mere procedural lapse in review thoroughness: it represents a failure of the core public protection function that professional licensure exists to serve.
If the use of AI was acceptable, did Engineer A have an ethical obligation to disclose the use of AI in any form to the Client?
Similar to other software used in the design or detailing process, Engineer A has no professional or ethical obligation to disclose AI use to Client W (unless such disclosure is required under Engineer A’s contract with Client W).
The Board's conclusion that Engineer A has no universal ethical obligation to disclose AI use to Client W - analogizing AI tools to other engineering software - requires significant qualification in light of the specific facts of this case and must not be read as a blanket rule. The analogy to conventional engineering software breaks down in at least three respects. First, conventional design software such as CAD or finite element analysis tools operates deterministically on engineer-supplied inputs and produces outputs the engineer can fully audit; large language model AI generates probabilistic, non-deterministic text and design content whose provenance and accuracy the engineer cannot fully trace or verify. Second, the observable stylistic discontinuity in the report - which Client W independently detected, noting it read as if written by two different authors - created an implicit misrepresentation about the nature of the work product and its authorship. At the moment Client W raised that observation, Engineer A's silence became an act of omission that a reasonable client would regard as misleading, implicating Code provisions I.5 and III.3. Third, the design document defects - misaligned dimensions and omitted safety features - demonstrate that undisclosed AI-generated outputs in this case did reach a client and could have proceeded to construction without correction absent Client W's independent review. The Board's no-disclosure-obligation conclusion is therefore defensible only in circumstances where the engineer has exercised thorough, competent review of AI outputs and where no client inquiry or observable anomaly has created an affirmative duty to speak. In this case, neither condition was fully satisfied for the design documents, and the stylistic anomaly in the report created a specific moment at which silence was ethically problematic.
By uploading Client W's confidential site data and groundwater monitoring information into an open-source AI platform without obtaining prior consent, did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c, and does this violation stand as a separate ethical breach from any question about AI disclosure or report quality?
Beyond the Board's finding that Engineer A's use of AI in report writing was partly ethical and partly unethical, a critical and independent ethical breach exists that the Board did not explicitly address: Engineer A violated the client confidentiality obligation by uploading Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining Client W's prior consent. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets, creating a foreseeable risk of disclosure beyond Engineer A's control. This breach of Code provision II.1.c stands entirely apart from questions about report quality, AI disclosure, or design document accuracy. A competent engineer deploying any third-party software tool - particularly a newly released, open-source platform with unknown data handling practices - bears an independent obligation to evaluate whether inputting confidential client data is permissible under the client relationship before acting. Engineer A's failure to seek Client W's consent before uploading that data constitutes a separate and self-standing ethical violation that the Board's analysis of report quality and AI transparency does not cure or subsume.
In response to Q101: Engineer A's upload of Client W's confidential site data and groundwater monitoring information into an open-source AI platform constitutes an independent and discrete ethical violation of Code provision II.1.c, entirely separate from any question about report quality or AI disclosure. The confidentiality obligation is not contingent on whether the resulting work product is accurate, polished, or ultimately beneficial to the client. By inputting proprietary client data into a publicly accessible AI system without obtaining Client W's prior consent, Engineer A exposed that information to potential third-party access, retention, or reuse by the AI platform - consequences Engineer A could not control or fully anticipate, particularly given their admitted unfamiliarity with the software. This breach stands on its own ethical foundation: the harm is the unauthorized exposure itself, not merely any downstream misuse. A competent engineer deploying a novel open-source tool with client data bears an affirmative obligation to investigate the data handling, storage, and privacy policies of that tool before use, and to obtain explicit client consent if any confidential information will be transmitted to a third-party system. Engineer A did neither. This violation is not remediated by the thoroughness of the subsequent report review, by the accuracy of the final work product, or by any disclosure or non-disclosure decision regarding AI authorship.
Given that Engineer B's retirement removed the primary quality assurance mechanism Engineer A had relied upon, did Engineer A have an independent ethical obligation to arrange an alternative peer review process before undertaking a complex, dual-scope engagement involving an unfamiliar AI tool, rather than substituting AI-generated output for that professional oversight?
The Board's analysis does not address a systemic professional vulnerability exposed by this case: Engineer A's over-reliance on AI tools was directly precipitated by the absence of the peer review and mentorship previously provided by Engineer B. When Engineer B retired, Engineer A lost not merely editorial guidance on technical writing but a substantive quality assurance mechanism that had been integral to Engineer A's professional practice. Rather than arranging an alternative peer review process - such as engaging a qualified colleague, a professional review service, or a subconsultant - Engineer A substituted an unfamiliar AI tool for that oversight function. This substitution was ethically inadequate for two independent reasons. First, AI tools are not peer reviewers: they do not apply independent professional judgment, cannot identify regulatory non-compliance from contextual knowledge, and cannot assume professional responsibility for the work. Second, the substitution required uploading confidential client data to an open-source platform, compounding the ethical problem. Code provision II.2.a's competence obligation and the broader duty of diligence implicit in responsible charge together suggest that when an engineer's established quality assurance mechanism becomes unavailable, the engineer bears an affirmative obligation to arrange a functionally equivalent alternative before undertaking complex, high-stakes engagements - not to proceed with an untested technological substitute. The NSPE Code of Ethics does not currently provide explicit guidance on peer review succession planning, and this case illustrates that such guidance would meaningfully serve the profession.
In response to Q102: Engineer B's retirement did not merely create an inconvenience for Engineer A - it removed the primary quality assurance mechanism upon which Engineer A had structurally depended for professional-grade output, particularly in technical writing. When that mechanism was removed, Engineer A faced a dual-scope engagement of meaningful complexity: a comprehensive contaminant characterization report requiring synthesis of groundwater monitoring data, and engineering design documents for infrastructure modifications. Rather than arranging an alternative peer review process - such as engaging a qualified colleague, contracting a third-party reviewer, or consulting with a professional organization - Engineer A substituted an unfamiliar, newly released open-source AI tool for that professional oversight. This substitution was not ethically neutral. The NSPE Code's competence provisions (I.2 and II.2.a) require engineers to undertake assignments only when qualified, and qualification encompasses not only technical domain knowledge but also the professional infrastructure necessary to deliver work of adequate quality. An engineer who knows they have a recognized weakness in a critical deliverable component, who has lost their primary quality assurance resource, and who then deploys an untested tool as a replacement - without any independent verification of that tool's reliability - has not satisfied the competence standard. Engineer A had an independent ethical obligation to arrange alternative peer review before proceeding, and the failure to do so compounded every subsequent deficiency in both the report and the design documents.
When Client W observed that the report read as if written by two different authors, did Engineer A incur an immediate ethical obligation to proactively disclose the AI's role in drafting the more polished sections, or was silence in that moment itself a deceptive act under Code provisions I.5 and III.3?
In response to Q103: When Client W directly observed that the report appeared to have been written by two different authors - a stylistically inconsistent observation that was, in fact, an accurate description of the report's dual-origin nature - Engineer A's silence in that moment was not ethically neutral. Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives the public or clients. Deception does not require an affirmative false statement; it can arise from deliberate silence in circumstances where a reasonable client would expect disclosure and where the omission creates or sustains a materially false impression. Client W's comment was a direct, specific observation that implicitly invited clarification about the report's authorship. A client who is told their report reads as if written by two people is, in practical terms, asking why. Engineer A's failure to respond honestly - by acknowledging that AI software had generated the more polished sections - allowed Client W to proceed under the false impression that the entire report was the product of Engineer A's own professional authorship. This silence, in context, constitutes a deceptive act under I.5 and conduct that deceives under III.3, independent of whether disclosure was required before submission. The moment of Client W's observation created a discrete, time-specific obligation to clarify, and Engineer A's failure to do so transformed a prior omission into an active, ongoing misrepresentation.
Does Engineer A's failure to include citations to the professional journal articles used to cross-check AI-generated content constitute a violation of the obligation to give credit for engineering work under Code provision III.9, and does it additionally undermine the evidentiary foundation of a technical report that may inform regulatory or remediation decisions?
Engineer A's failure to cite the professional journal articles used to cross-check AI-generated content, and the absence of any attribution for the AI-generated text itself, raises an underexamined concern about the evidentiary integrity of a technical report that may inform regulatory decisions or remediation actions. Code provision III.9 requires engineers to give credit for engineering work to those to whom credit is due. While this provision is most commonly applied to prevent engineers from claiming credit for others' work, it also carries an affirmative dimension: a technical report submitted in a professional capacity implicitly represents that its intellectual content reflects the engineer's own analysis and judgment. Where substantial portions of the report's prose and synthesis were generated by an AI system, and where the factual cross-checking relied on professional journal articles that are not cited, the report's evidentiary foundation is obscured. Regulators, future engineers, or legal proceedings relying on the report cannot assess the quality of the underlying analysis, trace its sources, or evaluate the reliability of the AI-generated synthesis. This is particularly consequential for a report addressing an emerging contaminant of concern, where the scientific basis for conclusions may be contested and where the report may serve as a foundational document for remediation planning or regulatory compliance. The absence of attribution and citation therefore undermines not only intellectual honesty in authorship but also the professional reliability and traceability of the work product itself.
In response to Q104: Engineer A's failure to cite the professional journal articles used to cross-check AI-generated content raises a concern under Code provision III.9, which requires engineers to give credit for engineering work to those to whom credit is due. While III.9 is most commonly applied to crediting the work of other engineers, its underlying principle - that the intellectual and evidentiary foundations of professional work must be honestly attributed - extends to the sources that substantiate technical conclusions. In a report that may inform regulatory decisions or remediation actions affecting public health and environmental safety, the absence of citations to the scientific literature used to verify AI-generated claims is not merely a stylistic deficiency. It deprives Client W, regulators, and any subsequent reviewers of the ability to independently assess the evidentiary basis for the report's conclusions, to identify the scope and currency of the literature consulted, and to evaluate whether the cross-checking process was adequate. This omission undermines the epistemic integrity of the report as a professional document. Furthermore, in the context of an emerging contaminant of concern - a category of substance where scientific understanding is actively evolving - the failure to ground conclusions in cited, verifiable sources creates a foreseeable risk that outdated, incomplete, or AI-hallucinated information could go undetected by downstream users who rely on the report's apparent professional authority.
Does the principle of Professional Competence Satisfied for Report Writing conflict with the principle of Intellectual Honesty in Authorship when Engineer A's thorough factual verification of AI-generated text is used to justify sealing a report whose prose was substantially composed by a non-human system, potentially misrepresenting the nature and origin of the professional work product to Client W?
In response to Q201: A genuine tension exists between the principle that professional competence in report writing can be satisfied through thorough post-generation verification and the principle of intellectual honesty in authorship. The Board concluded that Engineer A's thorough review of the AI-generated report text was sufficient to render that use of AI ethical. However, this conclusion does not fully resolve the authorship integrity question. When an engineer applies their professional seal to a document, they represent to the client and to the public that the work reflects their professional judgment, expertise, and authorship. The seal is not merely a quality certification - it is an assertion of intellectual ownership and responsible charge. A report whose prose was substantially composed by a non-human language model, and whose authorship was personalized only through minor wording adjustments, does not straightforwardly satisfy that representation, even if every factual claim has been verified. The verification process confirms accuracy; it does not transform AI-generated prose into the engineer's own professional expression. These two principles can be reconciled only if the engineering profession explicitly adopts a framework - which it has not yet done - that defines AI-assisted authorship as a recognized and disclosed mode of professional work product creation. Absent such a framework, the tension remains unresolved, and the Board's conclusion on report ethics should be understood as provisional rather than definitive.
The tension between Professional Competence Satisfied for Report Writing and Intellectual Honesty in Authorship was left substantively unresolved by the Board. The Board accepted that Engineer A's thorough factual verification of AI-generated text satisfied the competence dimension of responsible charge for the report, but it did not squarely confront the authorship dimension: when an engineer personalizes AI-generated prose with only minor wording adjustments and submits it under a professional seal without attribution, the seal implicitly represents that the engineer is the intellectual author of the work product. These two principles pull in opposite directions - competence review can be satisfied by rigorous fact-checking, but intellectual honesty in authorship requires that the origin of the substantive prose be accurately represented. The case teaches that competence and authorship are distinct professional obligations, and that satisfying one does not discharge the other. A fully ethical resolution would have required Engineer A to either disclose the AI's generative role or to rewrite the report in their own voice after verification, rather than treating minor wording edits as sufficient to claim authorship.
Does the principle of Responsible Charge Engagement conflict with the principle of Competence Assurance Under Novel Tool Adoption when an engineer applies their professional seal to AI-generated design documents after only a cursory review, given that the seal legally certifies personal responsible charge over work whose generative process the engineer does not fully understand?
In response to Q202: The tension between Responsible Charge Engagement and Competence Assurance Under Novel Tool Adoption is not merely theoretical - it is demonstrated concretely by the outcome in this case. Engineer A applied their professional seal to AI-generated design documents after only a cursory, high-level review. The professional seal carries a legal and ethical certification that the engineer has exercised responsible charge over the work: that they understand its content, have directed its preparation, and can stand behind its technical adequacy. A cursory review of output generated by a novel AI drafting tool - one with which Engineer A had no prior experience and whose generative logic Engineer A did not fully understand - cannot satisfy that standard. The subsequent discovery of misaligned dimensions and omitted safety features required by local regulations confirms that the cursory review was substantively inadequate. Code provision II.2.b prohibits engineers from affixing their signatures to plans dealing with subject matter in which they lack competence. Competence here encompasses not only domain knowledge in groundwater infrastructure design, but also sufficient understanding of the AI tool's outputs to certify their reliability. Engineer A possessed the former but demonstrably lacked the latter. The seal, in this context, was affixed in violation of II.2.b, and the tension between these two principles is resolved against Engineer A: responsible charge cannot be satisfied by reviewing outputs from a tool whose behavior the reviewing engineer does not adequately understand.
The tension between Responsible Charge Engagement and Competence Assurance Under Novel Tool Adoption was resolved against Engineer A in the design document context, but the resolution reveals a deeper principle hierarchy: when an engineer applies a professional seal, the seal does not merely certify that the engineer reviewed the output - it certifies that the engineer exercised personal, informed judgment over the generative process itself. Because Engineer A had no prior experience with the AI drafting tool and did not understand its full functionality, a cursory high-level review was structurally incapable of satisfying responsible charge, regardless of how much time was spent. The case teaches that the standard of review required to satisfy responsible charge scales inversely with the engineer's familiarity with the generative tool: the less the engineer understands how the tool produces its output, the more rigorous the independent verification must be. Deploying an unfamiliar AI tool is not ethically equivalent to deploying familiar software; it introduces an epistemic gap that only deeper review - not a high-level scan - can close. Public Welfare Paramount ultimately overrides both efficiency and tool novelty as a justification for reduced oversight, particularly where safety-critical omissions in design documents could reach construction.
Does the principle of Client Data Confidentiality in AI Tool Use conflict with the principle of Mentorship Continuity and Succession Planning when an engineer, deprived of a trusted peer reviewer, turns to an open-source AI platform as a substitute quality assurance mechanism, thereby necessarily exposing confidential client data to a third-party system in order to compensate for the loss of professional oversight?
The tension between Client Data Confidentiality in AI Tool Use and Mentorship Continuity and Succession Planning exposes a systemic vulnerability that the Board's conclusions do not address: Engineer A's loss of Engineer B's peer review created professional pressure to substitute AI assistance for human oversight, but the only available AI tool was open-source, meaning that satisfying the need for quality assurance necessarily required exposing Client W's confidential site data and groundwater monitoring information to a public platform without prior consent. This creates a structural conflict in which the engineer cannot simultaneously honor the confidentiality obligation and use the available compensating mechanism. The case teaches that this conflict is not resolvable by choosing one principle over the other after the fact - it is resolvable only by proactive planning before the engagement begins. The principle of Mentorship Continuity and Succession Planning, read alongside the confidentiality obligation under Code provision II.1.c, implies that when a primary quality assurance mechanism is lost, the engineer's first obligation is to identify a compliant replacement - whether a qualified peer reviewer, a privacy-compliant AI platform, or a scope limitation - before accepting work that cannot be competently and confidentially performed alone. Engineer A's failure to engage in that prior planning rendered the confidentiality breach not merely a procedural lapse but a foreseeable consequence of an inadequately structured professional practice.
Does the principle of Public Welfare Paramount conflict with the principle of AI Tool Transparency and Disclosure Applied to Client W Relationship when the Board concludes there is no universal ethical obligation to disclose AI use, yet the case demonstrates that undisclosed AI-generated design documents containing safety-critical omissions were submitted to a client and could have reached construction without correction had Client W not independently identified the defects?
In response to Q204: The Board's conclusion that there is no universal ethical obligation to disclose AI use is placed under significant strain by the facts of this case. The principle that public welfare is paramount - Code provision I.1 - is not merely aspirational; it functions as a constraint on every other professional decision an engineer makes. In this case, AI-generated design documents containing omitted safety features required by local regulations were submitted to Client W under Engineer A's professional seal. Had Client W not independently identified these deficiencies, the documents could have proceeded toward construction in a non-compliant and potentially dangerous state. The Board's general conclusion about disclosure is grounded in an analogy to other software tools used in engineering practice - an analogy that may hold when the tool is well-understood, widely validated, and used within established professional norms. It does not hold with equal force when the tool is newly released, unfamiliar to the practitioner, and demonstrably capable of generating safety-critical omissions that a cursory review failed to catch. In such circumstances, the public welfare principle does not merely permit disclosure - it may affirmatively require it, because disclosure enables the client and downstream reviewers to apply appropriate scrutiny to outputs whose reliability has not been professionally validated. The Board's conclusion on disclosure should therefore be understood as conditional: it applies when AI tools are used competently and their outputs are rigorously verified, not when they are deployed as substitutes for professional judgment with only superficial review.
From a deontological perspective, did Engineer A fulfill their duty of candor toward Client W by submitting AI-generated work products without disclosure, regardless of whether the final outputs were accurate?
In response to Q301: From a deontological perspective, Engineer A did not fulfill their duty of candor toward Client W. Kantian deontological ethics evaluates the moral worth of an action by reference to the maxim underlying it and whether that maxim could be universalized without contradiction. The maxim implicit in Engineer A's conduct - that an engineer may submit AI-generated work products under their professional seal without disclosing the AI's role, provided the outputs are verified for accuracy - cannot be universalized without undermining the foundational trust relationship between licensed professionals and their clients. If all engineers adopted this maxim, the professional seal would cease to function as a reliable signal of personal authorship and responsible charge, and clients would be systematically deprived of information material to their assessment of the work product's provenance and reliability. Furthermore, the duty of candor is not contingent on outcome: it is not satisfied by the fact that the report was accurate or that the design errors were caught. Deontological ethics holds that the duty to be honest with those who rely on one's professional representations exists independently of whether the deception caused harm. Engineer A's silence about AI's role - particularly in the face of Client W's direct observation about the report's stylistic inconsistency - constitutes a breach of the duty of candor that is not remediated by the quality of the final work product.
From a deontological perspective, did Engineer A breach their categorical duty to maintain Responsible Charge by sealing engineering design documents that contained safety omissions and dimensional errors they had only cursorily reviewed?
In response to Q302: From a deontological perspective, Engineer A breached their categorical duty to maintain Responsible Charge by sealing engineering design documents that contained safety omissions and dimensional errors they had only cursorily reviewed. Responsible Charge is not a procedural formality - it is a substantive professional and ethical duty that requires the engineer to have directed the work, to understand its content, and to be able to certify its technical adequacy. The professional seal is the outward expression of that duty, and affixing it to documents that have not been adequately reviewed is a categorical violation regardless of intent or outcome. From a deontological standpoint, the duty is breached at the moment of sealing, not at the moment of harm. The fact that Client W identified the errors before construction does not retroactively satisfy the Responsible Charge obligation; it merely prevented the consequences from being worse. Code provision II.2.b makes this categorical character explicit: engineers shall not affix their signatures to plans dealing with subject matter in which they lack competence. Engineer A's unfamiliarity with the AI drafting tool's outputs, combined with a cursory review that failed to detect regulatory non-compliance, establishes that the competence threshold was not met at the time of sealing. The deontological analysis therefore yields a clear conclusion: the duty was breached, independently of any consequentialist assessment of harm.
From a virtue ethics perspective, did Engineer A demonstrate the professional integrity and intellectual honesty expected of a licensed engineer by personalizing AI-generated report text with only minor wording adjustments and presenting it under their professional seal without attribution, even if the content was factually verified?
In response to Q304: From a virtue ethics perspective, Engineer A did not demonstrate the professional integrity and intellectual honesty expected of a licensed engineer in the report authorship process. Virtue ethics evaluates conduct by reference to the character traits and dispositions that a person of practical wisdom - a phronimos - would exhibit in the relevant professional role. A licensed engineer of good character, confronted with a recognized weakness in technical writing and the loss of their primary quality assurance resource, would seek transparent solutions: engaging a peer reviewer, disclosing limitations to the client, or explicitly attributing AI assistance in the work product. Engineer A instead chose a path that preserved the appearance of unassisted professional authorship while relying substantially on AI-generated prose. The minor wording adjustments made to personalize the content do not constitute the kind of intellectual engagement that transforms another's expression into one's own. A person of practical wisdom would recognize that submitting AI-generated text under a professional seal - without attribution, and in the face of a client's direct observation about stylistic inconsistency - is not merely a procedural omission but a failure of intellectual honesty. The virtue of integrity requires consistency between one's professional representations and the actual nature of one's work. Engineer A's conduct fell short of that standard, regardless of the report's factual accuracy.
From a consequentialist perspective, did the harm produced by Engineer A's cursory review of AI-generated design documents - resulting in misaligned dimensions and omitted safety features - outweigh any efficiency benefits gained from using AI-assisted drafting tools, and does this outcome retroactively render the decision to use those tools unethical?
In response to Q303: From a consequentialist perspective, the harm produced by Engineer A's cursory review of AI-generated design documents - resulting in misaligned dimensions and omitted safety features required by local regulations - does outweigh the efficiency benefits gained from using AI-assisted drafting tools in the design phase, and this outcome is ethically significant even though the errors were caught before construction. Consequentialist analysis evaluates actions by their expected outcomes, including foreseeable risks. A competent engineer deploying a novel, untested AI drafting tool for safety-critical infrastructure design, with no prior experience and only a cursory review process, creates a foreseeable probability of undetected errors reaching construction. The actual outcome - regulatory non-compliance and safety omissions - was not an improbable accident; it was a predictable consequence of an inadequate verification process applied to an unreliable generative tool. The efficiency gain from AI-assisted drafting is real but modest relative to the risk: the time saved in initial document generation was offset by the need for revision, the erosion of client trust, and the potential - had Client W not been diligent - for construction of non-compliant infrastructure. Consequentialist ethics does not require that harm actually occur to render a decision unethical; it requires that the expected value of the action, accounting for foreseeable risks, be negative. Here, the expected value of deploying an unfamiliar AI tool with cursory review for safety-critical design work was negative at the time of the decision, and the actual outcome confirms that assessment.
From a virtue ethics perspective, did Engineer A exhibit the prudence and professional humility expected of a competent engineer by choosing to deploy a novel, unfamiliar AI drafting tool - with no prior experience - as a substitute for the mentorship and peer review previously provided by Engineer B, rather than seeking alternative qualified oversight?
In response to Q305: From a virtue ethics perspective, Engineer A did not exhibit the prudence and professional humility expected of a competent engineer in choosing to deploy a novel, unfamiliar AI drafting tool as a substitute for the mentorship and peer review previously provided by Engineer B. Prudence - the virtue of practical wisdom applied to professional decision-making - requires an engineer to accurately assess their own capabilities and limitations, to recognize the boundaries of their competence, and to seek appropriate resources when those boundaries are approached. Engineer A's self-acknowledged weakness in technical writing, combined with the loss of Engineer B's quality assurance function, created a situation that called for heightened caution and deliberate compensatory measures. Instead, Engineer A responded by introducing a second source of uncertainty: an AI tool that was new to the market, open-source, and entirely unfamiliar to Engineer A. Professional humility would have led Engineer A to recognize that substituting one unknown - AI-generated output - for a known quality assurance resource - Engineer B's expert review - does not reduce professional risk; it compounds it. A prudent engineer in Engineer A's position would have sought an alternative qualified peer reviewer, disclosed the limitation to Client W, or scoped the engagement to match their verified capabilities. The choice to proceed without these safeguards reflects not merely a procedural lapse but a deficit in the practical wisdom that the engineering profession requires of its licensed practitioners.
From a consequentialist perspective, did Engineer A's decision to input Client W's confidential site data into open-source AI software - without obtaining prior consent - create a foreseeable risk of harm to Client W's proprietary interests that outweighs the drafting efficiency gained, and should that risk calculus have been apparent to a competent engineer before acting?
In response to Q306: From a consequentialist perspective, Engineer A's decision to input Client W's confidential site data into open-source AI software without prior consent created a foreseeable risk of harm to Client W's proprietary interests that outweighs the drafting efficiency gained, and that risk calculus should have been apparent to a competent engineer before acting. Open-source AI platforms are, by their nature, systems whose data handling, retention, training data incorporation, and third-party access policies are not under the control of the user. A competent engineer - particularly one engaged in environmental consulting involving site-specific groundwater data that may have regulatory, litigation, or competitive sensitivity - bears a professional obligation to investigate how any third-party system will handle client data before transmitting it. The efficiency benefit of AI-assisted drafting is real but bounded: it accelerates initial document generation. The risk created by uploading confidential client data to an unvetted public platform is potentially unbounded: it includes regulatory exposure, competitive harm, litigation risk, and reputational damage to Client W. A consequentialist analysis that assigns even a modest probability to these harms - and a competent engineer should have assigned a non-trivial probability - yields a negative expected value for the decision to use open-source AI without consent. The fact that Engineer A was unfamiliar with the AI software's full functionality, including its data handling practices, does not mitigate this conclusion; it reinforces it, because proceeding under conditions of ignorance about foreseeable risks is itself a consequentialist failure.
If Engineer A had disclosed their intended use of open-source AI software to Client W before beginning work, and Client W had withheld consent to upload confidential site data to a public AI platform, would Engineer A have been obligated to decline the use of AI tools entirely or to seek a privacy-compliant alternative, and how would that have affected the deliverables?
In response to Q401: If Engineer A had disclosed their intended use of open-source AI software to Client W before beginning work, and Client W had withheld consent to upload confidential site data to a public AI platform, Engineer A would have faced a clear ethical fork: either decline the use of open-source AI tools entirely, or identify a privacy-compliant alternative - such as an enterprise AI system with contractual data protection guarantees, or a locally deployed model with no external data transmission. The obligation to decline would not have been absolute; it would have been an obligation to find a compliant solution or to proceed without AI assistance. This counterfactual illuminates a structural point: the ethical failure was not the decision to use AI per se, but the decision to use a specific category of AI tool - open-source, publicly accessible - without first obtaining client consent for the data exposure that use necessarily entailed. Had Engineer A followed the disclosure-and-consent pathway, the subsequent work product might have been produced differently - perhaps with a privacy-compliant AI tool, perhaps without AI assistance at all - but the client relationship and the engineer's ethical standing would have been preserved. The counterfactual also suggests that the Board's conclusion that AI use is not unethical per se should be understood as conditional on the use of appropriate tools under appropriate consent frameworks, not as a blanket endorsement of any AI tool for any purpose.
If Engineer A had conducted a rigorous, line-by-line technical review of the AI-generated design documents - equivalent to the thorough review applied to the report - rather than a cursory high-level check, would the safety omissions and dimensional errors have been caught before submission to Client W, and would that level of review have been sufficient to satisfy the Responsible Charge standard?
In response to Q402: If Engineer A had conducted a rigorous, line-by-line technical review of the AI-generated design documents - equivalent in thoroughness to the review applied to the report - the safety omissions and dimensional errors would very likely have been identified before submission to Client W, and such a review would have been substantially more likely to satisfy the Responsible Charge standard. The case establishes a clear asymmetry: Engineer A's thorough review of the report was sufficient to catch factual inaccuracies and verify content quality, while the cursory review of the design documents was not sufficient to detect regulatory non-compliance and dimensional errors. This asymmetry suggests that the review process, not the use of AI per se, was the determinative variable in the design document failure. A rigorous review - one that checked each dimension against site survey data, verified each specification against local regulatory requirements, and confirmed the presence of all required safety features - would have functioned as an adequate Responsible Charge mechanism even for AI-generated outputs, provided the reviewing engineer possessed the domain competence to evaluate what they were reviewing. Engineer A possessed that domain competence in groundwater infrastructure design. The ethical failure was therefore not the use of AI drafting tools, but the decision to apply a cursory rather than rigorous review standard to safety-critical outputs from an untested tool. This counterfactual reinforces the Board's conclusion that AI-assisted drafting is not unethical per se, while clarifying that the adequacy of the review process is the critical ethical variable.
If Engineer A had explicitly cited the use of AI software in the report - including identifying which sections were AI-generated and which were independently authored - would Client W's observation that the report 'read as if written by two different authors' have raised or resolved concerns about the reliability and professional accountability of the work product?
In response to Q404: If Engineer A had explicitly cited the use of AI software in the report - identifying which sections were AI-generated and which were independently authored - Client W's observation that the report read as if written by two different authors would have been resolved rather than raised as a concern. The stylistic inconsistency that Client W detected was, in fact, an accurate artifact of the report's dual-origin nature: AI-generated prose tends to exhibit a characteristic uniformity and polish that differs from the more variable style of human technical writing, particularly from an engineer who self-identifies as less confident in technical writing. Explicit attribution would have provided Client W with a framework for understanding and contextualizing that observation, transforming a source of unease into a transparent feature of the work product. However, explicit attribution would also have raised a different set of questions: it would have invited Client W to scrutinize the AI-generated sections more carefully, to inquire about the AI tool's data handling practices, and potentially to raise concerns about the confidential data exposure that had already occurred. In this sense, disclosure would have been simultaneously clarifying and consequential - it would have resolved the authorship ambiguity while potentially surfacing the deeper confidentiality violation. This counterfactual suggests that the ethical case for disclosure is stronger than the Board's agnostic conclusion implies: transparency about AI use not only serves intellectual honesty but also enables clients to exercise informed oversight of work products that may have been generated under conditions they would not have approved.
If Engineer B had not retired and had continued to provide quality assurance review of Engineer A's work products, would Engineer A have been less likely to over-rely on AI tools, and does the absence of mentorship create a systemic professional vulnerability that the NSPE Code of Ethics should address through explicit guidance on peer review succession planning?
Decisions & Arguments
Causal-Normative Links
- Responsible Charge Active Review Obligation Partially Met By Engineer A Over Environmental Report
- AI-Generated Work Product Competence Verification Obligation Partially Met By Engineer A In Report Review
- Fact-Grounded Technical Opinion Obligation Partially Met By Engineer A In Environmental Report
- AI Tool Attribution Citation Obligation Violated By Engineer A In Environmental Report
- Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission
- AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents
- AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase
- Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents
- Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents
- Engineering Judgment Non-Substitution Obligation Violated By Engineer A In AI Design Reliance
- Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
- AI Tool Disclosure Obligation Breached By Engineer A In Design Document Submission To Client W
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents
- Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement
- Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A
- AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W
- Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Report
- Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool
Decision Points 18
When Client W directly observed that the environmental report appeared to have been written by two different authors, should Engineer A proactively disclose the AI's generative role, or treat the AI as an internal productivity tool and disclose only if directly asked?
Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients; deception can arise from deliberate silence where a reasonable client would expect disclosure and where omission sustains a materially false impression. Code provision III.9 requires engineers to give credit for engineering work to those to whom credit is due, extending to the intellectual and evidentiary sources that substantiate technical conclusions. The Intellectual Authorship Integrity Obligation requires that a professional seal represent not merely quality certification but intellectual ownership and responsible charge over the work's expression. Client W's direct observation about stylistic inconsistency created a discrete, time-specific obligation to clarify, and silence at that moment transformed a prior omission into an active, ongoing misrepresentation. The Responsible Charge Active Review Obligation was partially met by Engineer A's thorough factual verification, but that verification does not discharge the separate authorship attribution obligation.
Uncertainty arises because no settled NSPE Code provision at the time of the engagement explicitly mandated AI tool disclosure, and the Board concluded there is no universal freestanding obligation to disclose AI use analogous to disclosing other engineering software. One rebuttal condition is that Engineer A's thorough review may have sufficiently transformed the AI draft into Engineer A's own professionally accountable work product, such that the seal certifies technical accuracy and responsible charge rather than personal prose authorship. Additionally, the duty of candor may not extend to every tool or method used in professional practice: engineers are not obligated to disclose use of spell-checkers, grammar tools, or reference databases, leaving open whether AI drafting tools occupy a categorically different position absent explicit Code guidance.
Engineer A used open-source AI software to generate the initial draft of the environmental groundwater monitoring report, made only minor wording adjustments to personalize the content, conducted a thorough factual review cross-checking AI-generated claims against professional journal articles, and submitted the report to Client W under a professional seal without disclosing AI involvement. Client W independently observed that the report appeared to have been written by two different authors, a stylistically accurate description of the report's dual-origin nature, and Engineer A did not respond by acknowledging the AI's generative role. The report did not cite the journal articles used for cross-checking, nor did it attribute any content to AI generation.
Should Engineer A conduct a rigorous, line-by-line technical review of the AI-generated design documents before sealing them, or is a standard QA protocol sufficient, and if neither is adequate alone, should Engineer A bring in an independent peer reviewer?
Code provision I.1 places public safety, health, and welfare as the paramount obligation of a licensed engineer; this obligation is not aspirational but the foundational constraint against which all professional judgments must be measured. Code provisions I.2 and II.2.a require engineers to perform services only within areas of their competence, and this obligation extends to the tools they deploy: competence encompasses not only domain knowledge but also sufficient understanding of the AI tool's capabilities, limitations, and failure modes to exercise meaningful professional judgment over its outputs. Code provision II.2.b prohibits engineers from affixing their signatures to plans dealing with subject matter in which they lack competence. The professional seal certifies that the engineer has exercised responsible charge: that the engineer has understood and directed the work and can stand behind its technical adequacy. A cursory review of output from a novel tool whose generative logic the engineer does not fully understand cannot satisfy that standard. The Engineering Judgment Non-Substitution Obligation requires that AI tools supplement rather than substitute for independent professional engineering judgment. The Mentorship Succession and Peer Review Continuity Obligation required Engineer A to arrange alternative peer review when Engineer B retired, rather than substituting an unvalidated AI tool for that professional oversight.
Uncertainty arises because responsible charge standards have historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous outcome-based review were conducted, the engineer's unfamiliarity with the tool's internal logic might not independently defeat responsible charge. Additionally, no settled professional standard at the time of the engagement explicitly defined what constitutes 'sufficient' review of AI-generated design documents, leaving open whether a high-level review by an engineer with strong domain expertise in groundwater infrastructure could satisfy the standard for outputs that fall within that domain. A further rebuttal condition is that the safety omissions and dimensional errors might have been of a type detectable through standard engineering review protocols regardless of the generative tool used, meaning the tool's novelty may not have been the operative variable; the review depth was.
Engineer A used a newly released AI-assisted drafting tool, with no prior experience and without fully understanding its capabilities, limitations, or failure modes, to generate preliminary engineering design documents for groundwater infrastructure modifications. Engineer A conducted only a cursory, high-level review of the AI-generated documents before applying a professional seal and submitting them to Client W. The documents were subsequently found to contain misaligned dimensions and omitted safety features required by local regulations. These defects were identified by Client W, not by Engineer A's review. Engineer B, who had previously provided quality assurance review of Engineer A's work, had retired and was no longer available. The AI drafting tool was new to the market, and Engineer A had no prior experience with it.
Should Engineer A obtain Client W's prior informed consent before uploading confidential site data to the open-source AI platform, or may Engineer A proceed using technical safeguards or platform substitution without seeking consent?
Code provision II.1.c requires engineers to treat information obtained in the course of a professional engagement as confidential and not to disclose it without the client's consent. Uploading confidential client data to an open-source AI platform is tantamount to placing that information in the public domain, because the engineer cannot control how the platform processes, retains, or transmits the data. The harm of unauthorized exposure is the breach itself, independent of whether downstream misuse occurs. A competent engineer deploying any novel third-party software tool, particularly a newly released, open-source platform with unknown data handling practices, bears an affirmative pre-use obligation to investigate whether inputting confidential client data is permissible under the client relationship and to obtain explicit client consent if any confidential information will be transmitted to a third-party system. This violation is not remediated by the thoroughness of the subsequent report review, the accuracy of the final work product, or any disclosure or non-disclosure decision regarding AI authorship. The confidentiality breach stands as a discrete, self-contained ethical violation.
Uncertainty arises from the question of whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of Code provision II.1.c, since the data was used instrumentally to generate a work product rather than shared with an identifiable third-party recipient in the conventional sense. A further rebuttal condition exists: if the open-source AI platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the foreseeable risk of harm insufficient to constitute a breach. Additionally, if Engineer A had obtained Client W's informed consent to use the AI platform, which did not occur here but represents a compliant pathway, the confidentiality obligation would have been satisfied, suggesting the violation is procedural rather than categorical.
Engineer A gathered Client W's confidential site data and groundwater monitoring information and uploaded it into an open-source AI platform to synthesize the information for the environmental report. Engineer A was unfamiliar with the AI software's full functionality, including its data handling, storage, and privacy policies. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets. Engineer A did not obtain Client W's prior consent before uploading the confidential data, and did not investigate the platform's data handling practices before use. The confidential data included site-specific environmental information that may have regulatory, litigation, or competitive sensitivity.
Should Engineer A proactively disclose the AI tool's generative role to Client W, including which sections it drafted, or treat the AI as an internal drafting tool requiring no special disclosure?
Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients; deception can arise from deliberate silence where a reasonable client would expect disclosure and where the omission sustains a materially false impression. Code provision III.9 requires attribution of intellectual and evidentiary sources. The professional seal implicitly represents personal authorship and responsible charge over the work's expression, not merely its factual accuracy. Client W's direct observation about stylistic inconsistency created a discrete, time-specific obligation to clarify. Competing against these is the Board's conclusion that AI tools are analogous to other engineering software (CAD, FEA), that no universal disclosure obligation exists absent a contractual requirement or affirmative misrepresentation, and that Engineer A's thorough review satisfied the competence dimension of responsible charge.
Uncertainty arises because no settled NSPE Code provision at the time of the engagement explicitly mandated AI tool disclosure, and the analogy to conventional software has genuine force: engineers do not routinely disclose every drafting or analysis tool used. The rebuttal condition, whether Engineer A's thorough review sufficiently transforms AI-generated prose into Engineer A's own professional work product, is unresolved by existing professional standards. Additionally, if the duty of candor does not extend to disclosure of every tool or method, silence about AI use may not constitute deception per se. However, the specific moment of Client W's authorship observation distinguishes this case from routine non-disclosure: silence at that moment allowed a materially false impression to persist.
Engineer A used an open-source AI tool to draft the environmental report, then made minor wording adjustments and applied their professional seal without disclosing AI involvement. Engineer A conducted a thorough factual review, cross-checking AI-generated content against professional journal articles. Client W independently observed that the report appeared to have been written by two different authors, an accurate description of its dual-origin nature. Engineer A did not respond to this observation by acknowledging the AI's role.
Should Engineer A investigate the open-source AI platform's data handling practices and obtain Client W's prior written consent before uploading confidential site data, or may Engineer A proceed using anonymization or treat the platform as equivalent to local software?
Code provision II.1.c imposes an affirmative, independent obligation to protect client confidentiality that is not contingent on downstream work product quality or accuracy. Open-source AI platforms process user-submitted data in ways that may expose it to third parties, retain it, or incorporate it into training datasets; these consequences lie outside Engineer A's control. A competent engineer deploying any novel third-party tool with client data bears a pre-use obligation to investigate data handling policies and obtain explicit client consent before transmitting confidential information. The harm of unauthorized exposure is the breach itself, independent of whether downstream misuse occurs. Competing against this is the argument that uploading data to an AI platform may not constitute 'disclosure to a third party' within the meaning of II.1.c if the data was processed algorithmically without human access, and that the efficiency benefit of AI-assisted drafting is a legitimate professional interest.
Uncertainty arises from whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of Code provision II.1.c, since the data was used instrumentally rather than shared with an identifiable person. If the platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the risk negligible. Additionally, if Engineer A reasonably but incorrectly believed the platform operated with the same data isolation as locally installed software, the breach might be characterized as a competence failure rather than a deliberate confidentiality violation, though this does not eliminate the ethical breach.
Engineer A uploaded Client W's proprietary site characterization data and groundwater monitoring information into an open-source AI platform without first obtaining Client W's consent. Engineer A was unfamiliar with the AI software's full functionality, including its data handling, retention, and potential training data incorporation practices. The data was site-specific, potentially sensitive for regulatory, litigation, or competitive purposes. No contractual provision authorized transmission of client data to third-party systems.
After losing Engineer B's peer review function, should Engineer A perform a rigorous independent technical review of all AI-generated documents before sealing them, apply the existing QA protocol treating the AI tool as equivalent to conventional drafting software, or engage a third-party AI-experienced reviewer to fill the oversight gap?
Code provisions I.2 and II.2.a require engineers to perform services only within areas of their competence, and this obligation extends to the tools deployed: competence encompasses sufficient understanding of a tool's capabilities, limitations, and failure modes to exercise meaningful professional judgment over its outputs. Code provision II.2.b prohibits engineers from affixing their signatures to plans dealing with subject matter in which they lack competence. The professional seal certifies responsible charge: that the engineer has directed the work, understood its content, and can stand behind its technical adequacy. A cursory review of output from a novel tool whose generative logic the engineer does not understand cannot satisfy this standard. Code provision I.1 places public safety as the paramount obligation, and sealing documents with regulatory safety omissions after only cursory review directly implicates this obligation. The loss of Engineer B's peer review created an affirmative obligation to arrange a functionally equivalent alternative, not to substitute an untested AI tool for professional oversight.
Uncertainty is created by the absence of an explicit NSPE Code provision mandating peer review as a precondition to practice, leaving the obligation to be derived inferentially from general competence and public safety provisions. Responsible charge standards have historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous review could theoretically have caught the errors, the question becomes whether the review actually performed was adequate, not whether the engineer understood the AI's generative logic. Additionally, if the AI tool were sufficiently mature and well-documented, and its outputs independently verifiable by Engineer A's existing domain expertise in groundwater infrastructure, the novelty of the tool alone might not establish incompetence. The rebuttal condition, whether a more rigorous review would have caught the defects, is addressed by the counterfactual analysis suggesting it would have.
Engineer B retired, removing the quality assurance and peer review function Engineer A had structurally depended upon. Engineer A then used a newly released, open-source AI drafting tool, with no prior experience, to generate engineering design documents for a dual-scope engagement. Engineer A conducted only a cursory, high-level review of the AI-generated design documents before affixing their professional seal. The documents were subsequently found to contain misaligned dimensions and omitted safety features required by local regulations: defects that Client W, not Engineer A, identified. Had Client W not conducted an independent review, the deficient documents could have proceeded to construction.
What standard of review must Engineer A apply to AI-generated design documents before affixing a professional seal, given unfamiliarity with the AI drafting tool and the safety-critical nature of the outputs?
The professional seal legally and ethically certifies responsible charge: that the engineer has directed the work, understands its content, and can stand behind its technical adequacy (II.2.b). The competence obligation (I.2, II.2.a) extends to the tools deployed, not merely the subject matter. Public welfare is paramount (I.1), and safety-critical omissions in design documents that could reach construction represent a failure of the core public protection function of licensure. The standard of review required to satisfy responsible charge scales inversely with the engineer's familiarity with the generative tool.
Uncertainty arises because responsible charge doctrine has historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous outcome-based review were performed, some argue that tool familiarity is not independently required. Additionally, no settled professional standard at the time of the engagement explicitly specified what review depth is required for AI-generated design documents, leaving open whether a high-level review by a domain-competent engineer could satisfy the standard for lower-complexity elements.
Engineer A used a newly released, open-source AI drafting tool with no prior experience to generate engineering design documents for Client W. Engineer A then conducted only a cursory, high-level review before affixing a professional seal. Client W subsequently discovered misaligned dimensions and omitted safety features required by local regulations, defects that Engineer A's review failed to catch.
When Client W observed that the report appeared written by two different authors, should Engineer A disclose that AI software drafted the more polished sections, or respond in a way that affirms professional responsibility without identifying the AI's specific role?
Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients. Deception does not require an affirmative false statement; deliberate silence in circumstances where a reasonable client would expect disclosure and where the omission sustains a materially false impression constitutes a deceptive act. Client W's direct observation about stylistic inconsistency created a discrete, time-specific obligation to clarify: a client who is told their report reads as if written by two people is, in practical terms, asking why. The professional seal implicitly represents intellectual authorship and responsible charge over the work's expression, not merely quality certification. Code provision III.9's credit-giving obligation extends to the intellectual and evidentiary origins of professional work product.
Uncertainty is created by the board's own conclusion that there is no universal ethical obligation to disclose AI tool use, analogizing AI to other engineering software. The duty of candor may not extend to disclosure of every tool or method used in professional practice: engineers are not obligated to disclose which CAD software they use. Additionally, if Engineer A's thorough review sufficiently transformed the AI draft into Engineer A's own professional work product, the authorship representation may be defensible. The rebuttal condition, whether review thoroughness converts AI-generated text into engineer-authored work, remains professionally unsettled.
Engineer A used AI software to draft the environmental report, then personalized the AI-generated prose with minor wording adjustments and submitted the report under a professional seal without disclosing the AI's role. Client W observed that the report read as if written by two different authors, an observation that was factually accurate given the report's dual-origin nature. Engineer A did not respond to this observation by disclosing the AI's generative contribution. The report's factual content had been thoroughly verified by Engineer A against professional journal articles, though those sources were not cited.
Before uploading Client W's confidential site data to an open-source AI platform, should Engineer A investigate the platform's data handling practices and obtain Client W's explicit consent, proceed under the existing engagement agreement, or use only anonymized data in the AI tool?
Code provision II.1.c imposes an affirmative, independent obligation to protect client confidentiality that is not contingent on the accuracy or quality of the resulting work product. A competent engineer deploying any novel third-party platform with client data bears a pre-use obligation to investigate data handling and privacy policies and to obtain explicit client consent if confidential information will be transmitted to a third-party system. The harm of unauthorized exposure is the breach itself, independent of whether misuse occurs. The confidentiality obligation is not remediated by the thoroughness of subsequent review, the accuracy of the final work product, or any disclosure decision regarding AI authorship. This violation stands as a separate and self-contained ethical breach.
Uncertainty arises from the question of whether uploading data to an open-source AI platform constitutes 'disclosure to a third party' within the meaning of II.1.c, since the data was used instrumentally to generate a work product rather than shared with an identifiable human third party. Additionally, if the open-source platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the foreseeable risk insufficient to constitute a breach. The confidentiality obligation might also be partially rebutted if Engineer A had obtained Client W's informed consent, or if the engagement contract authorized use of third-party software tools without specifying consent requirements.
Engineer A uploaded Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining Client W's prior consent. Engineer A was, by their own admission, unfamiliar with the AI software's full functionality, including its data handling, retention, and third-party access policies. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets. Engineer B's retirement had removed Engineer A's primary quality assurance mechanism, creating professional pressure to use AI assistance for a complex dual-scope engagement.
Should Engineer A conduct a rigorous line-by-line technical review of all AI-generated design documents before sealing them, apply the firm's standard QA protocol as used for conventional drafting tools, or engage a qualified peer reviewer to verify safety-critical elements?
The professional seal legally and ethically certifies that the engineer has exercised responsible charge: that they understood, directed, and can stand behind the work's technical adequacy (II.2.b). The competence obligation (I.2, II.2.a) extends to the tools deployed, not merely the subject matter: an engineer using a novel AI tool whose generative logic they do not fully understand must apply verification rigor proportionate to that epistemic gap. The public safety paramount obligation (I.1) functions as a non-negotiable constraint: safety-critical omissions that could reach construction represent a failure of the core public protection function of licensure. Competing against these is the argument that responsible charge doctrine has historically focused on review outcomes rather than process comprehension: if outputs are technically adequate, the review method may be immaterial.
Uncertainty arises from the absence of a defined professional standard specifying what constitutes 'sufficient' review of AI-generated design documents. A rebuttal condition holds that if the safety omissions and dimensional errors were of a type detectable through standard domain-competent review, which Engineer A possessed in groundwater infrastructure, then the failure was one of review thoroughness rather than tool incompetence, and a more rigorous application of standard review protocols might have satisfied responsible charge without requiring specialized AI expertise. Additionally, the analogy to conventional CAD software creates uncertainty: if AI drafting tools are treated as instrumentally equivalent to other design software, the review standard applicable to CAD outputs might be argued to apply equally here.
Engineer A used a newly released, open-source AI drafting tool with no prior experience to generate engineering design documents for Client W. Engineer A then conducted only a cursory, high-level review of those documents before affixing their professional seal. Client W subsequently identified misaligned dimensions and omitted safety features required by local regulations: defects that Engineer A's review had not caught. Engineer B, who had previously provided quality assurance review, had retired before the engagement began.
Should Engineer A proactively disclose the AI tool's generative role in response to Client W's authorship observation, or address the concern through explanation or revision without specifically disclosing AI involvement?
Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients: deception does not require an affirmative false statement but can arise from deliberate silence where a reasonable client would expect disclosure and where the omission sustains a materially false impression. Client W's direct observation about stylistic inconsistency constituted an implicit inquiry about authorship that created a discrete, time-specific obligation to clarify. Code provision III.9's credit-giving obligation extends to the intellectual and evidentiary sources substantiating technical conclusions, including AI-generated prose and uncited journal articles used for verification. Competing against these is the board's general conclusion that no universal disclosure obligation exists absent contractual requirement, and that the professional seal and responsible charge, not authorship attribution, are the operative accountability mechanisms in engineering.
Uncertainty is generated by the absence of an explicit NSPE Code provision mandating AI tool disclosure at the time of the engagement, leaving the obligation to be derived inferentially from general candor and non-deception provisions. A rebuttal condition holds that the duty of candor may not extend to disclosure of every tool or method used in professional practice: engineers are not obligated to disclose use of word processors, spreadsheet software, or other drafting aids, and if Engineer A's thorough review sufficiently transformed the AI draft into Engineer A's own professionally verified work product, the authorship representation implicit in the seal may be defensible. Additionally, the virtue ethics rebuttal notes that engineers routinely rely on drafting assistance without attribution, and the novelty of AI as a drafting tool may not yet carry settled professional norms distinguishing it from other forms of professional assistance.
Engineer A used open-source AI software to draft the environmental report for Client W, then conducted a thorough factual review, cross-checking AI-generated content against professional journal articles, before submitting the report under their professional seal without any disclosure of AI involvement. Client W observed that the report read as if written by two different authors, a stylistically accurate description of the report's dual-origin nature. Engineer A did not respond to this observation by disclosing the AI's role. The report contained no citations to the journal articles used for cross-checking and no attribution of AI-generated sections.
After Engineer B's retirement removed Engineer A's primary quality assurance mechanism, did Engineer A have an independent ethical obligation to arrange a functionally equivalent alternative peer review process before undertaking a complex dual-scope engagement, and did the decision to substitute an open-source AI tool for that oversight independently violate the client data confidentiality obligation by necessarily exposing Client W's proprietary site data to a public platform without prior consent?
Code provisions I.2 and II.2.a require engineers to undertake assignments only when qualified, and qualification encompasses not only technical domain knowledge but also the professional infrastructure necessary to deliver work of adequate quality. When an established quality assurance mechanism becomes unavailable, the engineer bears an affirmative obligation to arrange a functionally equivalent alternative before accepting complex, high-stakes work. AI tools are not peer reviewers: they do not apply independent professional judgment, cannot identify regulatory non-compliance from contextual knowledge, and cannot assume professional responsibility. Separately and independently, Code provision II.1.c imposes an absolute confidentiality obligation: uploading Client W's proprietary site data to an open-source platform without prior consent exposed that information to potential third-party access, retention, or reuse that Engineer A could not control, a breach that stands entirely apart from questions of report quality or AI disclosure. The structural conflict between these two obligations, the need for quality assurance and the confidentiality constraint on the only available compensating mechanism, was resolvable only through proactive planning before the engagement began.
Uncertainty arises from the absence of an explicit NSPE Code provision mandating peer review as a precondition to practice, leaving the succession obligation to be derived from general competence and public welfare provisions. A rebuttal condition holds that if Engineer A's own domain expertise was sufficient to independently verify the work product, and Engineer A did possess genuine competence in groundwater infrastructure and environmental assessment, the absence of a peer reviewer might not independently constitute an ethical violation, provided the engineer's own review was sufficiently rigorous. On the confidentiality question, uncertainty arises from whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of II.1.c, since the data was used instrumentally rather than shared with an identifiable recipient, and if the platform's data handling practices were such that uploaded data was provably isolated and not retained, a consequentialist analysis might not find foreseeable harm.
Engineer B, who had served as Engineer A's primary mentor and quality assurance reviewer, retired before the Client W engagement began. Engineer A then accepted a complex dual-scope engagement, a comprehensive environmental contaminant characterization report and engineering design documents for infrastructure modifications, without arranging alternative peer review. Engineer A chose to use a newly released, open-source AI tool with no prior experience, uploading Client W's confidential site data and groundwater monitoring information into the public platform without obtaining Client W's prior consent. Engineer A was unfamiliar with the AI software's full functionality, including its data handling and retention practices.
Should Engineer A perform a rigorous, element-by-element technical review of AI-generated design documents before sealing them, apply the firm's standard QA protocol as used for conventionally drafted documents, or engage a third-party reviewer with AI-specific experience to verify safety-critical elements?
The professional seal legally and ethically certifies that the engineer has exercised responsible charge: that they understood, directed, and can stand behind the work's technical adequacy (II.2.b). The competence obligation (I.2, II.2.a) extends to the tools deployed, not merely the subject matter: an engineer using a novel AI tool whose generative logic they do not fully understand must apply verification rigor proportionate to that epistemic gap. The public safety paramount obligation (I.1) functions as a non-negotiable constraint: safety-critical omissions in design documents that could reach construction represent a failure of the core public protection function of licensure. The Engineering Judgment Non-Substitution Obligation holds that AI-generated outputs cannot substitute for the engineer's own professional judgment over safety-critical elements.
Uncertainty arises because responsible charge standards have historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous outcome-based review were performed, some argue the generative mechanism is irrelevant. Additionally, no settled professional standard at the time of the engagement explicitly defined what depth of review of AI-generated design documents was required to satisfy responsible charge, leaving open whether a high-level review by a domain-competent engineer might suffice for lower-risk elements. A further rebuttal holds that the harm was contingent on the cursory review, not inherent to AI tool use, meaning the tool adoption itself was not unethical; only the review depth was.
Engineer A used a newly released, open-source AI drafting tool with no prior experience to generate engineering design documents for Client W. Engineer A then conducted only a cursory, high-level review of those documents before affixing their professional seal and submitting them. Client W subsequently identified misaligned dimensions and omitted safety features required by local regulations: defects that Engineer A's review had not detected. Engineer B, who had previously provided quality assurance review, had retired before this engagement.
After Engineer B's retirement eliminated Engineer A's primary QA resource, should Engineer A arrange a functionally equivalent peer reviewer before proceeding with the Client W engagement, proceed relying on personal domain competence, or disclose the QA gap to Client W and propose a reduced scope?
Code provisions I.2 and II.2.a require engineers to undertake assignments only when qualified, and qualification encompasses not only technical domain knowledge but also the professional infrastructure necessary to deliver work of adequate quality. An engineer who knows they have a recognized weakness in a critical deliverable component, who has lost their primary quality assurance resource, and who then deploys an untested tool as a replacement without independent verification of that tool's reliability has not satisfied the competence standard. AI tools are not peer reviewers: they do not apply independent professional judgment, cannot identify regulatory non-compliance from contextual knowledge, and cannot assume professional responsibility for the work. The substitution also compounded the ethical problem by requiring upload of confidential client data to an open-source platform. The virtue of prudence requires accurate self-assessment of limitations and deliberate compensatory measures when those limits are approached.
Uncertainty is created by the absence of an explicit NSPE Code provision mandating peer review as a precondition to practice, leaving the obligation to be derived inferentially from general competence and public welfare provisions. If the AI tool were sufficiently mature, well-documented, and its outputs independently verifiable by Engineer A's existing domain expertise, the novelty of the tool alone might not establish an ethical obligation to seek alternative oversight. Additionally, if no qualified peer reviewer was reasonably accessible within the project timeline and budget, the obligation to arrange alternative review would be rebutted by practical impossibility, and the engineer's domain competence might be argued sufficient to satisfy the competence standard independently.
Engineer B had served as Engineer A's primary quality assurance resource, providing peer review and mentorship that was integral to Engineer A's professional practice. When Engineer B retired before the Client W engagement, Engineer A lost that oversight mechanism. Engineer A then accepted a dual-scope engagement (a comprehensive contaminant characterization report and engineering design documents for infrastructure modifications) and chose to deploy a newly released open-source AI drafting tool, with which Engineer A had no prior experience, as a substitute for that professional oversight. Engineer A self-acknowledged a recognized weakness in technical writing. The resulting design documents contained misaligned dimensions and omitted safety features that Engineer A's cursory review did not detect.
Should Engineer A investigate the open-source AI platform's data handling practices and obtain Client W's explicit consent before uploading confidential site data, or may Engineer A proceed by anonymizing inputs or treating the platform as equivalent to standard third-party engineering software?
Code provision II.1.c imposes an affirmative, independent obligation to protect client confidentiality that is not contingent on the quality or accuracy of the resulting work product. A competent engineer deploying any novel third-party software tool, particularly an open-source platform with unknown data handling practices, bears an affirmative pre-use duty to investigate how that system will handle client data before transmitting it, and to obtain explicit client consent if confidential information will be exposed to a third-party system. The harm of unauthorized exposure is the breach itself, independent of whether downstream misuse occurs. This violation stands entirely apart from questions about report quality, AI disclosure, or design document accuracy and is not remediated by the thoroughness of subsequent review. From a consequentialist perspective, the foreseeable risk of harm to Client W's proprietary interests (regulatory exposure, competitive harm, litigation risk) outweighs the drafting efficiency gained, and that risk calculus should have been apparent to a competent engineer before acting.
Uncertainty arises from the question of whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of Code provision II.1.c, since the data was used instrumentally to generate a work product rather than shared with an identifiable third-party recipient. If the open-source AI platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the risk sufficiently low to be outweighed by the efficiency benefit. Additionally, if Engineer A had obtained Client W's informed consent to use the AI platform, even implicitly through a broad project authorization, the confidentiality breach would be rebutted.
Engineer A uploaded Client W's proprietary site data and groundwater monitoring information (information with potential regulatory, litigation, and competitive sensitivity) into an open-source, publicly accessible AI platform without Client W's knowledge or consent. Engineer A was self-admittedly unfamiliar with the AI software's full functionality, including its data handling, retention, and third-party access policies. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets, creating risks of disclosure beyond Engineer A's control.
Given that Engineer B's retirement removed Engineer A's primary quality assurance mechanism and that Engineer A had no prior experience with the AI drafting tool, should Engineer A perform a rigorous line-by-line technical review before sealing, apply the standard QA protocol as-is, or engage an independent peer reviewer to verify safety-critical elements?
Responsible Charge requires the engineer to have directed, understood, and be able to certify the technical adequacy of sealed work (II.2.b). Competence obligations extend to the tools deployed, not merely the subject matter (I.2, II.2.a). The standard of review required to satisfy responsible charge scales inversely with the engineer's familiarity with the generative tool. When an established quality assurance mechanism is lost, the engineer bears an affirmative obligation to arrange a functionally equivalent alternative before undertaking complex, safety-critical engagements. Public welfare is paramount (I.1) and cannot be subordinated to efficiency gains from novel tool adoption.
Responsible charge doctrine has historically focused on the adequacy of review outcomes rather than process comprehension: if a sufficiently rigorous outcome-based review were performed, unfamiliarity with the tool's internal logic might not independently constitute a breach. Additionally, no explicit NSPE Code provision mandates peer review as a precondition to practice, leaving the obligation to be derived inferentially from general competence and public welfare provisions. A high-level review by a domain-competent engineer might be argued sufficient if the AI tool's outputs were of a type amenable to rapid expert verification.
Engineer B retired, removing the primary peer review mechanism Engineer A had relied upon. Engineer A then accepted a dual-scope engagement and deployed a novel, newly released open-source AI drafting tool with no prior experience. Engineer A conducted only a cursory, high-level review of the AI-generated design documents before affixing a professional seal. The documents were subsequently found to contain misaligned dimensions and omitted safety features required by local regulations, defects not caught by Engineer A's review but identified by Client W independently.
When Client W directly observed that the report appeared to have been written by two different authors, accurately identifying its dual-origin nature, should Engineer A disclose the AI tool's generative role, deflect with a technical explanation, or offer revision without attribution?
Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives clients; deception does not require an affirmative false statement but can arise from deliberate silence where a reasonable client would expect disclosure and where the omission sustains a materially false impression. Client W's direct observation about authorial inconsistency constituted an implicit inquiry about the report's provenance, creating a discrete, time-specific obligation to clarify. Code provision III.9 requires giving credit for engineering work to those to whom credit is due, which extends to the intellectual and evidentiary sources, including AI-generated prose and uncited journal articles, that substantiate technical conclusions. The professional seal implicitly represents intellectual authorship and responsible charge over the work's expression, not merely its factual accuracy.
The duty of candor may not extend to disclosure of every tool or method used in professional practice: engineers are not obligated to disclose use of CAD software, finite element analysis tools, or other drafting aids. If Engineer A's thorough review sufficiently transformed the AI draft into Engineer A's own professional work product, the authorship representation may be defensible. No settled professional standard at the time of the engagement explicitly defined the threshold of review depth required to convert AI-generated text into engineer-authored work. Code provision III.9's credit obligation may apply only when another engineer's or author's work is directly incorporated, not when AI-generated synthesis is independently verified and corrected.
Engineer A used an open-source AI tool to draft the environmental report, then conducted a thorough factual review, cross-checking AI-generated claims against professional journal articles, before sealing and submitting the report without any disclosure of AI involvement. The report exhibited a stylistic inconsistency that Client W independently detected, observing that it appeared written by two different authors. This observation was factually accurate: AI-generated prose tends toward uniform polish that differs from Engineer A's more variable human writing style. Engineer A did not respond to Client W's observation by disclosing the AI's role. The report also omitted citations to the journal articles used for cross-checking.
Should Engineer A obtain Client W's explicit prior consent before uploading confidential site data to the open-source AI platform, or may Engineer A proceed by anonymizing the data or limiting inputs to publicly available information?
Code provision II.1.c imposes an affirmative, non-contingent obligation to protect client confidentiality that precedes and is independent of questions about work product quality or AI disclosure. A competent engineer deploying any novel third-party platform with client data bears an independent obligation to investigate the data handling, storage, and privacy policies of that tool before use, and to obtain explicit client consent if confidential information will be transmitted to a third-party system. The harm of unauthorized exposure is the breach itself, independent of whether actual misuse occurs. The loss of Engineer B's peer review created professional pressure to use AI as a compensating mechanism, but the only available open-source tool necessarily exposed confidential data, creating a structural conflict resolvable only by proactive planning before engagement acceptance.
Uncertainty arises from whether uploading data to an open-source AI platform constitutes 'disclosure' to a third party within the meaning of Code provision II.1.c, since the data was used instrumentally to generate a work product rather than shared with an identifiable third party for their benefit. If the open-source AI platform's data handling practices were such that uploaded data was provably isolated, not retained, and not accessible to third parties, a consequentialist analysis might find the foreseeable risk insufficient to constitute a breach. The confidentiality obligation might also be partially rebutted if Engineer A had obtained Client W's informed consent to use the AI platform, or if the data uploaded was sufficiently anonymized or aggregated to prevent identification.
Engineer A uploaded Client W's confidential site data and groundwater monitoring information (proprietary environmental data with potential regulatory, litigation, and competitive sensitivity) into an open-source AI platform without obtaining Client W's prior consent. Engineer A was unfamiliar with the AI software's full functionality, including its data handling, retention, and third-party access policies. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets, creating foreseeable risks of disclosure beyond Engineer A's control.
Event Timeline
Causal Flow
- Chose AI for Report Drafting → Input Confidential Data into Public AI
- Input Confidential Data into Public AI → Conducted Thorough Report Review
- Conducted Thorough Report Review → Submitted Report Without AI Disclosure
- Submitted Report Without AI Disclosure → Used AI for Design Document Generation
- Used AI for Design Document Generation → Conducted Cursory Design Document Review
- Conducted Cursory Design Document Review → Engineer B Retirement Occurs
Opening Context
You are Engineer A, a licensed environmental engineering consultant retained by Client W to prepare two deliverables: a comprehensive environmental report on an organic contaminant of concern, and engineering design documents for groundwater infrastructure modifications at the same site. Your mentor and longtime quality-assurance reviewer, Engineer B, has recently retired. Without that support, and facing deadline pressure, you have turned to a newly released open-source AI tool to assist with both deliverables. You have no prior experience with this tool, and the platform requires you to upload project data to generate drafts. Client W has not been informed of any of this. The report draft and the preliminary design documents are now ready. How you review, seal, disclose, and deliver these work products will determine whether you meet your professional obligations or fall short of them.
Characters (8)
A licensed professional engineer retained by Client W to prepare a comprehensive environmental report and develop engineering design documents for groundwater infrastructure modifications. Used AI software tools to assist with drafting deliverables but conducted only cursory review before affixing professional seal, resulting in quality deficiencies identified by the client.
- Likely motivated by efficiency and workload management following the loss of mentorship support, prioritizing timely deliverable submission over rigorous professional review and transparency obligations.
- Likely motivated by overconfidence in AI-generated outputs and time pressure, leading to an underestimation of the verification rigor required before affixing a professional seal to design documents.
- Professional obligation to maintain responsible charge and active engagement in the engineering process from conception to completion.
Developed engineering design documents including plans and specifications for groundwater infrastructure modifications using AI-assisted drafting tools; conducted only cursory review, resulting in misaligned dimensions and omission of required safety features.
A recently retired senior engineer who previously provided essential supervisory oversight and quality assurance that helped maintain Engineer A's professional standards.
- Motivated by a genuine commitment to professional mentorship during active practice, though retirement inadvertently created a critical accountability gap that Engineer A failed to compensate for through alternative oversight measures.
Retained Engineer A for environmental contaminant reporting and groundwater infrastructure design; reviewed deliverables, identified quality inconsistencies in the report and critical deficiencies in the design documents, and instructed Engineer A to revise plans to meet professional and regulatory standards.
Used AI language processing software to draft an environmental groundwater monitoring report and AI-assisted drafting tools to prepare design documents; performed insufficient review of AI-generated design outputs resulting in misaligned dimensions and omitted safety features; uploaded client confidential information to a public AI interface without client consent; failed to include appropriate citations for AI-generated content.
Bore statutory responsible charge obligations over the groundwater monitoring report and design documents; failed to maintain active engagement in the design and development process by relying on AI-generated plans without comprehensive verification; did not satisfy responsible charge requirements by conducting only a high-level post-preparation review.
Retained Engineer A for environmental consulting and design services; reviewed AI-assisted design documents and identified misaligned dimensions and omitted safety features; questioned inconsistencies in the report; held confidentiality interests in information uploaded to public AI systems without consent.
Senior engineer whose absence from the project left Engineer A without proper oversight and mentorship support, contributing to Engineer A operating in a compromised manner and relying excessively on AI-generated outputs without adequate verification.
Tension between AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W and AI-Generated Work Product Disclosure Constraint Engineer A Report Submission
Tension between AI Tool Disclosure Obligation Breached By Engineer A In Design Document Submission To Client W and Competence Assurance Under Novel Tool Adoption Applied to AI Drafting Tool
Tension between Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A and Confidential Client Data Input Constraint Engineer A Open-Source AI Upload
Tension between Intellectual Authorship Integrity Obligation Violated By Engineer A In Report Submission / AI-Assisted Design Comprehensive Verification Obligation Violated By Engineer A In Design Documents and Responsible Charge Active Review Obligation Breached By Engineer A Over Design Documents
Tension between Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W Design Documents and AI Tool Disclosure Obligation Breached By Engineer A In Report Submission To Client W
Tension between Responsible Charge Active Review Obligation Breached By Engineer A Over Design Documents / Client Data Confidentiality in AI Tool Use Violated by Engineer A / Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement and Client Consent for Third-Party Data Sharing Obligation Violated By Engineer A
Tension between AI-Generated Work Product Competence Verification Obligation and Regulatory Compliance Verification Obligation
Tension between Responsible Charge Active Review Obligation Violated By Engineer A Over Design Documents and Safety Obligation Implicated By Engineer A Omission Of Safety Features In Design Documents
Tension between AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase and Proactive AI Disclosure to Client Obligation Violated By Engineer A Toward Client W
Tension between Mentorship Succession and Peer Review Continuity Obligation Breached By Engineer A Following Engineer B Retirement and Client Data Confidentiality in AI Tool Use Violated by Engineer A
Tension between Responsible Charge Active Review Obligation — differentially met for report (thorough) and violated for design documents (cursory) and AI-Generated Work Product Competence Verification Obligation Violated By Engineer A In Design Phase
Tension between Competence Obligation Breached By Engineer A In Selection And Use Of Novel AI Drafting Tool and Regulatory Compliance Verification Obligation Violated By Engineer A In Design Documents
Tension between Mentorship Succession and Peer Review Continuity Obligation Violated By Engineer A Following Engineer B Retirement and Client Data Confidentiality in AI Tool Use Violated by Engineer A
Engineer A is obligated to comprehensively verify all AI-assisted design outputs to ensure technical accuracy and safety, yet the retirement of Engineer B (the mentor) has eliminated the peer review mechanism that would normally serve as a critical backstop for that verification. Fulfilling the verification obligation now falls entirely on Engineer A alone, but the structural constraint — the absence of a peer reviewer — makes robust, independent verification practically impossible without additional compensating measures Engineer A has not implemented. This creates a genuine dilemma: the obligation demands a standard of verification that the post-retirement environment structurally prevents from being met, and any shortfall directly threatens public safety in groundwater infrastructure design.
Engineer A bears a positive obligation to represent the true intellectual authorship of submitted work products honestly, including acknowledging AI-generated content. Simultaneously, the non-deception constraint prohibits Engineer A from misrepresenting authorship in any form. These two entities are not merely redundant — they create a dilemma when Engineer A's professional self-interest, efficiency pressures, and the absence of explicit firm or regulatory policy on AI attribution create situational incentives to allow the client to assume full human authorship. The tension is between the active duty to disclose and the passive temptation to omit, where omission itself constitutes deception. The breach already identified in the case confirms that Engineer A resolved this tension in the ethically impermissible direction, underscoring the real pull of competing pressures.
Engineer A is obligated under responsible charge to actively and substantively review all design documents bearing their seal, exercising genuine technical judgment over every element. However, the competence boundary constraint recognizes that Engineer A lacks sufficient familiarity with the novel AI drafting tool to critically evaluate whether its outputs are technically sound, algorithmically biased, or subtly erroneous. This creates a genuine dilemma: signing off on documents fulfills the procedural dimension of responsible charge but violates its substantive dimension if Engineer A cannot competently assess what the AI produced. Conversely, refusing to seal documents until competence is established would delay the project and create contractual tensions with Client W. The engineer is caught between the formal duty to be in responsible charge and the epistemic constraint that prevents that charge from being meaningfully exercised.
Opening States (10)
Key Takeaways
- Engineers must proactively disclose AI tool usage to clients, as failure to do so violates transparency obligations even when the final work product meets technical standards.
- Uploading confidential client data to open-source or third-party AI platforms without explicit client consent constitutes a breach of confidentiality obligations regardless of the engineer's intent or the quality of output produced.
- Adopting novel tools like AI drafting assistants requires engineers to first verify their own competence in critically evaluating AI-generated outputs before incorporating them into professional deliverables.