Step 4: Review
Review extracted entities and commit to OntServe
Phase 2A: Code Provisions
Code provision references: 9
Hold paramount the safety, health, and welfare of the public.
Perform services only in areas of their competence.
Avoid deceptive acts.
Engineers shall not reveal facts, data, or information without the prior consent of the client or employer except as authorized or required by law or this Code.
Engineers shall undertake assignments only when qualified by education or experience in the specific technical fields involved.
Engineers shall not affix their signatures to any plans or documents dealing with subject matter in which they lack competence, nor to any plan or document not prepared under their direction and control.
Engineers shall avoid all conduct or practice that deceives the public.
Engineers shall conform with state registration laws in the practice of engineering.
Engineers shall give credit for engineering work to those to whom credit is due, and will recognize the proprietary interests of others.
Phase 2B: Precedent Cases
Precedent case references: 2
The Board cited this case to establish historical precedent for the ethical use of computer-assisted drafting and design tools, and to show the BER's longstanding openness to new technologies in engineering practice, including early anticipation of AI.
The Board cited this case to establish that technology must never replace or substitute for engineering judgment, and to draw a parallel to Engineer A's insufficient review of AI-generated design documents, while distinguishing Engineer A's situation on the ground that Engineer A, unlike the engineer in that case, is not incompetent.
Phase 2C: Questions & Conclusions
Ethical conclusions: 28
Engineer A's use of AI in report writing was partly ethical and partly unethical.
The use of AI-assisted drafting tools by Engineer A was not unethical per se.
As with other software used in the design or detailing process, Engineer A has no professional or ethical obligation to disclose AI use to Client W (unless such disclosure is required under Engineer A's contract with Client W).
Beyond the Board's finding that Engineer A's use of AI in report writing was partly ethical and partly unethical, a critical and independent ethical breach exists that the Board did not explicitly address: Engineer A violated the client confidentiality obligation by uploading Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining Client W's prior consent. Open-source AI platforms typically process and may retain user-submitted data in ways that expose it to third parties or incorporate it into training datasets, creating a foreseeable risk of disclosure beyond Engineer A's control. This breach of Code provision II.1.c stands entirely apart from questions about report quality, AI disclosure, or design document accuracy. A competent engineer deploying any third-party software tool - particularly a newly released, open-source platform with unknown data handling practices - bears an independent obligation to evaluate whether inputting confidential client data is permissible under the client relationship before acting. Engineer A's failure to seek Client W's consent before uploading that data constitutes a separate and self-standing ethical violation that the Board's analysis of report quality and AI transparency does not cure or subsume.
The Board's conclusion that AI-assisted drafting tools are not unethical per se must be qualified by a competence threshold that Engineer A did not meet with respect to the design documents. Code provisions I.2 and II.2.a require that engineers perform services only within areas of their competence, and this obligation extends to the tools they deploy. When an engineer uses a novel, unfamiliar AI drafting tool - one newly released to market with no prior experience on the engineer's part - and then conducts only a cursory, high-level review of its outputs before sealing and submitting engineering design documents, the engineer has not satisfied the competence standard that makes AI tool use ethically permissible in the first place. The Board's permissive conclusion about AI drafting tools implicitly assumes that the engineer possesses sufficient understanding of the tool's capabilities, limitations, and failure modes to exercise meaningful professional judgment over its outputs. Engineer A lacked that understanding entirely. The resulting design documents contained misaligned dimensions and omitted safety features required by local regulations - defects that a competent, engaged review would have identified. Accordingly, the ethical permissibility of AI-assisted drafting tools is conditional, not categorical: it depends on whether the engineer has sufficient competence with the tool and applies sufficient verification rigor to maintain genuine responsible charge over the work product.
The Board's conclusion that Engineer A has no universal ethical obligation to disclose AI use to Client W - analogizing AI tools to other engineering software - requires significant qualification in light of the specific facts of this case and must not be read as a blanket rule. The analogy to conventional engineering software breaks down in at least three respects. First, conventional design software such as CAD or finite element analysis tools operates deterministically on engineer-supplied inputs and produces outputs the engineer can fully audit; large language model AI generates probabilistic, non-deterministic text and design content whose provenance and accuracy the engineer cannot fully trace or verify. Second, the observable stylistic discontinuity in the report - which Client W independently detected, noting it read as if written by two different authors - created an implicit misrepresentation about the nature of the work product and its authorship. At the moment Client W raised that observation, Engineer A's silence became an act of omission that a reasonable client would regard as misleading, implicating Code provisions I.5 and III.3. Third, the design document defects - misaligned dimensions and omitted safety features - demonstrate that undisclosed AI-generated outputs in this case did reach a client and could have proceeded to construction without correction absent Client W's independent review. The Board's no-disclosure-obligation conclusion is therefore defensible only in circumstances where the engineer has exercised thorough, competent review of AI outputs and where no client inquiry or observable anomaly has created an affirmative duty to speak. In this case, neither condition was fully satisfied for the design documents, and the stylistic anomaly in the report created a specific moment at which silence was ethically problematic.
The Board's analysis does not address a systemic professional vulnerability exposed by this case: Engineer A's over-reliance on AI tools was directly precipitated by the absence of the peer review and mentorship previously provided by Engineer B. When Engineer B retired, Engineer A lost not merely editorial guidance on technical writing but a substantive quality assurance mechanism that had been integral to Engineer A's professional practice. Rather than arranging an alternative peer review process - such as engaging a qualified colleague, a professional review service, or a subconsultant - Engineer A substituted an unfamiliar AI tool for that oversight function. This substitution was ethically inadequate for two independent reasons. First, AI tools are not peer reviewers: they do not apply independent professional judgment, cannot identify regulatory non-compliance from contextual knowledge, and cannot assume professional responsibility for the work. Second, the substitution required uploading confidential client data to an open-source platform, compounding the ethical problem. Code provision II.2.a's competence obligation and the broader duty of diligence implicit in responsible charge together suggest that when an engineer's established quality assurance mechanism becomes unavailable, the engineer bears an affirmative obligation to arrange a functionally equivalent alternative before undertaking complex, high-stakes engagements - not to proceed with an untested technological substitute. The NSPE Code of Ethics does not currently provide explicit guidance on peer review succession planning, and this case illustrates that such guidance would meaningfully serve the profession.
The Board's finding that Engineer A's use of AI was partly unethical with respect to the design documents is further supported by the public safety dimension that the Board did not fully develop. Code provision I.1 places the safety, health, and welfare of the public as the paramount obligation of a licensed engineer, and this obligation is not merely aspirational - it is the foundational constraint against which all other professional judgments must be measured. The AI-generated design documents submitted by Engineer A contained omitted safety features required by local regulations. These omissions were not caught by Engineer A's cursory review and were only identified by Client W. Had Client W not conducted an independent technical review, those deficient documents could have proceeded to construction, creating a direct risk to public safety. The fact that the error was caught before construction does not retroactively satisfy the responsible charge standard; the standard requires that the engineer's own review be sufficient to ensure compliance, not that a client's independent review serve as the final safety check. Engineer A's sealing of documents containing regulatory safety omissions - after only a cursory review - therefore implicates not only Code provisions II.2.b and III.8.a regarding sealing and registration law compliance, but also the paramount public safety obligation of Code provision I.1. The ethical violation in the design phase is accordingly more serious than a mere procedural lapse in review thoroughness: it represents a failure of the core public protection function that professional licensure exists to serve.
Engineer A's failure to cite the professional journal articles used to cross-check AI-generated content, and the absence of any attribution for the AI-generated text itself, raises an underexamined concern about the evidentiary integrity of a technical report that may inform regulatory decisions or remediation actions. Code provision III.9 requires engineers to give credit for engineering work to those to whom credit is due. While this provision is most commonly applied to prevent engineers from claiming credit for others' work, it also carries an affirmative dimension: a technical report submitted in a professional capacity implicitly represents that its intellectual content reflects the engineer's own analysis and judgment. Where substantial portions of the report's prose and synthesis were generated by an AI system, and where the factual cross-checking relied on professional journal articles that are not cited, the report's evidentiary foundation is obscured. Regulators, future engineers, or legal proceedings relying on the report cannot assess the quality of the underlying analysis, trace its sources, or evaluate the reliability of the AI-generated synthesis. This is particularly consequential for a report addressing an emerging contaminant of concern, where the scientific basis for conclusions may be contested and where the report may serve as a foundational document for remediation planning or regulatory compliance. The absence of attribution and citation therefore undermines not only intellectual honesty in authorship but also the professional reliability and traceability of the work product itself.
In response to Q101: Engineer A's upload of Client W's confidential site data and groundwater monitoring information into an open-source AI platform constitutes an independent and discrete ethical violation of Code provision II.1.c, entirely separate from any question about report quality or AI disclosure. The confidentiality obligation is not contingent on whether the resulting work product is accurate, polished, or ultimately beneficial to the client. By inputting proprietary client data into a publicly accessible AI system without obtaining Client W's prior consent, Engineer A exposed that information to potential third-party access, retention, or reuse by the AI platform - consequences Engineer A could not control or fully anticipate, particularly given their admitted unfamiliarity with the software. This breach stands on its own ethical foundation: the harm is the unauthorized exposure itself, not merely any downstream misuse. A competent engineer deploying a novel open-source tool with client data bears an affirmative obligation to investigate the data handling, storage, and privacy policies of that tool before use, and to obtain explicit client consent if any confidential information will be transmitted to a third-party system. Engineer A did neither. This violation is not remediated by the thoroughness of the subsequent report review, by the accuracy of the final work product, or by any disclosure or non-disclosure decision regarding AI authorship.
In response to Q102: Engineer B's retirement did not merely create an inconvenience for Engineer A - it removed the primary quality assurance mechanism upon which Engineer A had structurally depended for professional-grade output, particularly in technical writing. When that mechanism was removed, Engineer A faced a dual-scope engagement of meaningful complexity: a comprehensive contaminant characterization report requiring synthesis of groundwater monitoring data, and engineering design documents for infrastructure modifications. Rather than arranging an alternative peer review process - such as engaging a qualified colleague, contracting a third-party reviewer, or consulting with a professional organization - Engineer A substituted an unfamiliar, newly released open-source AI tool for that professional oversight. This substitution was not ethically neutral. The NSPE Code's competence provisions (I.2 and II.2.a) require engineers to undertake assignments only when qualified, and qualification encompasses not only technical domain knowledge but also the professional infrastructure necessary to deliver work of adequate quality. An engineer who knows they have a recognized weakness in a critical deliverable component, who has lost their primary quality assurance resource, and who then deploys an untested tool as a replacement - without any independent verification of that tool's reliability - has not satisfied the competence standard. Engineer A had an independent ethical obligation to arrange alternative peer review before proceeding, and the failure to do so compounded every subsequent deficiency in both the report and the design documents.
In response to Q103: When Client W directly observed that the report appeared to have been written by two different authors - an observation of stylistic inconsistency that was, in fact, an accurate description of the report's dual-origin nature - Engineer A's silence in that moment was not ethically neutral. Code provisions I.5 and III.3 prohibit deceptive acts and conduct that deceives the public or clients. Deception does not require an affirmative false statement; it can arise from deliberate silence in circumstances where a reasonable client would expect disclosure and where the omission creates or sustains a materially false impression. Client W's comment was a direct, specific observation that implicitly invited clarification about the report's authorship. A client who is told their report reads as if written by two people is, in practical terms, asking why. Engineer A's failure to respond honestly - by acknowledging that AI software had generated the more polished sections - allowed Client W to proceed under the false impression that the entire report was the product of Engineer A's own professional authorship. This silence, in context, constitutes a deceptive act under I.5 and conduct that deceives under III.3, independent of whether disclosure was required before submission. The moment of Client W's observation created a discrete, time-specific obligation to clarify, and Engineer A's failure to do so transformed a prior omission into an active, ongoing misrepresentation.
In response to Q104: Engineer A's failure to cite the professional journal articles used to cross-check AI-generated content raises a concern under Code provision III.9, which requires engineers to give credit for engineering work to those to whom credit is due. While III.9 is most commonly applied to crediting the work of other engineers, its underlying principle - that the intellectual and evidentiary foundations of professional work must be honestly attributed - extends to the sources that substantiate technical conclusions. In a report that may inform regulatory decisions or remediation actions affecting public health and environmental safety, the absence of citations to the scientific literature used to verify AI-generated claims is not merely a stylistic deficiency. It deprives Client W, regulators, and any subsequent reviewers of the ability to independently assess the evidentiary basis for the report's conclusions, to identify the scope and currency of the literature consulted, and to evaluate whether the cross-checking process was adequate. This omission undermines the epistemic integrity of the report as a professional document. Furthermore, in the context of an emerging contaminant of concern - a category of substance where scientific understanding is actively evolving - the failure to ground conclusions in cited, verifiable sources creates a foreseeable risk that outdated, incomplete, or AI-hallucinated information could go undetected by downstream users who rely on the report's apparent professional authority.
In response to Q201: A genuine tension exists between the principle that professional competence in report writing can be satisfied through thorough post-generation verification and the principle of intellectual honesty in authorship. The Board concluded that Engineer A's thorough review of the AI-generated report text was sufficient to render that use of AI ethical. However, this conclusion does not fully resolve the authorship integrity question. When an engineer applies their professional seal to a document, they represent to the client and to the public that the work reflects their professional judgment, expertise, and authorship. The seal is not merely a quality certification - it is an assertion of intellectual ownership and responsible charge. A report whose prose was substantially composed by a non-human language model, and whose authorship was personalized only through minor wording adjustments, does not straightforwardly satisfy that representation, even if every factual claim has been verified. The verification process confirms accuracy; it does not transform AI-generated prose into the engineer's own professional expression. These two principles can be reconciled only if the engineering profession explicitly adopts a framework - which it has not yet done - that defines AI-assisted authorship as a recognized and disclosed mode of professional work product creation. Absent such a framework, the tension remains unresolved, and the Board's conclusion on report ethics should be understood as provisional rather than definitive.
In response to Q202: The tension between Responsible Charge Engagement and Competence Assurance Under Novel Tool Adoption is not merely theoretical - it is demonstrated concretely by the outcome in this case. Engineer A applied their professional seal to AI-generated design documents after only a cursory, high-level review. The professional seal carries a legal and ethical certification that the engineer has exercised responsible charge over the work: that they understand its content, have directed its preparation, and can stand behind its technical adequacy. A cursory review of output generated by a novel AI drafting tool - one with which Engineer A had no prior experience and whose generative logic Engineer A did not fully understand - cannot satisfy that standard. The subsequent discovery of misaligned dimensions and omitted safety features required by local regulations confirms that the cursory review was substantively inadequate. Code provision II.2.b prohibits engineers from affixing their signatures to plans dealing with subject matter in which they lack competence. Competence here encompasses not only domain knowledge in groundwater infrastructure design, but also sufficient understanding of the AI tool's outputs to certify their reliability. Engineer A possessed the former but demonstrably lacked the latter. The seal, in this context, was affixed in violation of II.2.b, and the tension between these two principles is resolved against Engineer A: responsible charge cannot be satisfied by reviewing outputs from a tool whose behavior the reviewing engineer does not adequately understand.
In response to Q204: The Board's conclusion that there is no universal ethical obligation to disclose AI use is placed under significant strain by the facts of this case. The principle that public welfare is paramount - Code provision I.1 - is not merely aspirational; it functions as a constraint on every other professional decision an engineer makes. In this case, AI-generated design documents containing omitted safety features required by local regulations were submitted to Client W under Engineer A's professional seal. Had Client W not independently identified these deficiencies, the documents could have proceeded toward construction in a non-compliant and potentially dangerous state. The Board's general conclusion about disclosure is grounded in an analogy to other software tools used in engineering practice - an analogy that may hold when the tool is well-understood, widely validated, and used within established professional norms. It does not hold with equal force when the tool is newly released, unfamiliar to the practitioner, and demonstrably capable of generating safety-critical omissions that a cursory review failed to catch. In such circumstances, the public welfare principle does not merely permit disclosure - it may affirmatively require it, because disclosure enables the client and downstream reviewers to apply appropriate scrutiny to outputs whose reliability has not been professionally validated. The Board's conclusion on disclosure should therefore be understood as conditional: it applies when AI tools are used competently and their outputs are rigorously verified, not when they are deployed as substitutes for professional judgment with only superficial review.
In response to Q301: From a deontological perspective, Engineer A did not fulfill their duty of candor toward Client W. Kantian deontological ethics evaluates the moral worth of an action by reference to the maxim underlying it and whether that maxim could be universalized without contradiction. The maxim implicit in Engineer A's conduct - that an engineer may submit AI-generated work products under their professional seal without disclosing the AI's role, provided the outputs are verified for accuracy - cannot be universalized without undermining the foundational trust relationship between licensed professionals and their clients. If all engineers adopted this maxim, the professional seal would cease to function as a reliable signal of personal authorship and responsible charge, and clients would be systematically deprived of information material to their assessment of the work product's provenance and reliability. Furthermore, the duty of candor is not contingent on outcome: it is not satisfied by the fact that the report was accurate or that the design errors were caught. Deontological ethics holds that the duty to be honest with those who rely on one's professional representations exists independently of whether the deception caused harm. Engineer A's silence about AI's role - particularly in the face of Client W's direct observation about the report's stylistic inconsistency - constitutes a breach of the duty of candor that is not remediated by the quality of the final work product.
In response to Q302: From a deontological perspective, Engineer A breached their categorical duty to maintain Responsible Charge by sealing engineering design documents that contained safety omissions and dimensional errors they had only cursorily reviewed. Responsible Charge is not a procedural formality - it is a substantive professional and ethical duty that requires the engineer to have directed the work, to understand its content, and to be able to certify its technical adequacy. The professional seal is the outward expression of that duty, and affixing it to documents that have not been adequately reviewed is a categorical violation regardless of intent or outcome. From a deontological standpoint, the duty is breached at the moment of sealing, not at the moment of harm. The fact that Client W identified the errors before construction does not retroactively satisfy the Responsible Charge obligation; it merely prevented the consequences from being worse. Code provision II.2.b makes this categorical character explicit: engineers shall not affix their signatures to plans dealing with subject matter in which they lack competence. Engineer A's unfamiliarity with the AI drafting tool's outputs, combined with a cursory review that failed to detect regulatory non-compliance, establishes that the competence threshold was not met at the time of sealing. The deontological analysis therefore yields a clear conclusion: the duty was breached, independently of any consequentialist assessment of harm.
In response to Q303: From a consequentialist perspective, the harm produced by Engineer A's cursory review of AI-generated design documents - resulting in misaligned dimensions and omitted safety features required by local regulations - does outweigh the efficiency benefits gained from using AI-assisted drafting tools in the design phase, and this outcome is ethically significant even though the errors were caught before construction. Consequentialist analysis evaluates actions by their expected outcomes, including foreseeable risks. A competent engineer deploying a novel, untested AI drafting tool for safety-critical infrastructure design, with no prior experience and only a cursory review process, creates a foreseeable probability of undetected errors reaching construction. The actual outcome - regulatory non-compliance and safety omissions - was not an improbable accident; it was a predictable consequence of an inadequate verification process applied to an unreliable generative tool. The efficiency gain from AI-assisted drafting is real but modest relative to the risk: the time saved in initial document generation was offset by the need for revision, the erosion of client trust, and the potential - had Client W not been diligent - for construction of non-compliant infrastructure. Consequentialist ethics does not require that harm actually occur to render a decision unethical; it requires that the expected value of the action, accounting for foreseeable risks, be negative. Here, the expected value of deploying an unfamiliar AI tool with cursory review for safety-critical design work was negative at the time of the decision, and the actual outcome confirms that assessment.
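The expected-value reasoning above can be made concrete with a minimal sketch. All probabilities, harms, and benefit figures below are purely hypothetical illustrations (nothing in the case record quantifies them); the point is only the structure of the calculus: a modest, certain efficiency gain weighed against probability-weighted foreseeable harms, under cursory versus rigorous review.

```python
# Illustrative consequentialist expected-value calculus.
# All numbers are hypothetical "units of professional value",
# not figures drawn from the NSPE case itself.

def expected_value(benefit, risks):
    """benefit: certain efficiency gain; risks: list of (probability, harm) pairs."""
    return benefit - sum(p * harm for p, harm in risks)

drafting_time_saved = 10  # hypothetical efficiency gain from AI drafting

# Cursory review of an unfamiliar tool: high residual error probability.
cursory = expected_value(drafting_time_saved, [
    (0.30, 50),   # regulatory non-compliance reaches the client
    (0.05, 400),  # deficient design proceeds toward construction
])

# Rigorous, line-by-line review: same tool, far lower residual risk.
rigorous = expected_value(drafting_time_saved, [
    (0.02, 50),
    (0.001, 400),
])

print(cursory)   # negative: foreseeable harms outweigh the gain
print(rigorous)  # positive: the same tool becomes defensible under rigorous review
```

The design choice the sketch highlights is the Board's own asymmetry: the ethical variable is not the tool but the review standard applied to its outputs.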
In response to Q304: From a virtue ethics perspective, Engineer A did not demonstrate the professional integrity and intellectual honesty expected of a licensed engineer in the report authorship process. Virtue ethics evaluates conduct by reference to the character traits and dispositions that a person of practical wisdom - a phronimos - would exhibit in the relevant professional role. A licensed engineer of good character, confronted with a recognized weakness in technical writing and the loss of their primary quality assurance resource, would seek transparent solutions: engaging a peer reviewer, disclosing limitations to the client, or explicitly attributing AI assistance in the work product. Engineer A instead chose a path that preserved the appearance of unassisted professional authorship while relying substantially on AI-generated prose. The minor wording adjustments made to personalize the content do not constitute the kind of intellectual engagement that transforms another's expression into one's own. A person of practical wisdom would recognize that submitting AI-generated text under a professional seal - without attribution, and in the face of a client's direct observation about stylistic inconsistency - is not merely a procedural omission but a failure of intellectual honesty. The virtue of integrity requires consistency between one's professional representations and the actual nature of one's work. Engineer A's conduct fell short of that standard, regardless of the report's factual accuracy.
In response to Q305: From a virtue ethics perspective, Engineer A did not exhibit the prudence and professional humility expected of a competent engineer in choosing to deploy a novel, unfamiliar AI drafting tool as a substitute for the mentorship and peer review previously provided by Engineer B. Prudence - the virtue of practical wisdom applied to professional decision-making - requires an engineer to accurately assess their own capabilities and limitations, to recognize the boundaries of their competence, and to seek appropriate resources when those boundaries are approached. Engineer A's self-acknowledged weakness in technical writing, combined with the loss of Engineer B's quality assurance function, created a situation that called for heightened caution and deliberate compensatory measures. Instead, Engineer A responded by introducing a second source of uncertainty: an AI tool that was new to the market, open-source, and entirely unfamiliar to Engineer A. Professional humility would have led Engineer A to recognize that substituting one unknown - AI-generated output - for a known quality assurance resource - Engineer B's expert review - does not reduce professional risk; it compounds it. A prudent engineer in Engineer A's position would have sought an alternative qualified peer reviewer, disclosed the limitation to Client W, or scoped the engagement to match their verified capabilities. The choice to proceed without these safeguards reflects not merely a procedural lapse but a deficit in the practical wisdom that the engineering profession requires of its licensed practitioners.
In response to Q306: From a consequentialist perspective, Engineer A's decision to input Client W's confidential site data into open-source AI software without prior consent created a foreseeable risk of harm to Client W's proprietary interests that outweighs the drafting efficiency gained, and that risk calculus should have been apparent to a competent engineer before acting. Open-source AI platforms are, by their nature, systems whose data handling, retention, training data incorporation, and third-party access policies are not under the control of the user. A competent engineer - particularly one engaged in environmental consulting involving site-specific groundwater data that may have regulatory, litigation, or competitive sensitivity - bears a professional obligation to investigate how any third-party system will handle client data before transmitting it. The efficiency benefit of AI-assisted drafting is real but bounded: it accelerates initial document generation. The risk created by uploading confidential client data to an unvetted public platform is potentially unbounded: it includes regulatory exposure, competitive harm, litigation risk, and reputational damage to Client W. A consequentialist analysis that assigns even a modest probability to these harms - and a competent engineer should have assigned a non-trivial probability - yields a negative expected value for the decision to use open-source AI without consent. The fact that Engineer A was unfamiliar with the AI software's full functionality, including its data handling practices, does not mitigate this conclusion; it reinforces it, because proceeding under conditions of ignorance about foreseeable risks is itself a consequentialist failure.
DetailsIn response to Q401: If Engineer A had disclosed their intended use of open-source AI software to Client W before beginning work, and Client W had withheld consent to upload confidential site data to a public AI platform, Engineer A would have faced a clear ethical fork: either decline the use of open-source AI tools entirely, or identify a privacy-compliant alternative - such as an enterprise AI system with contractual data protection guarantees, or a locally deployed model with no external data transmission. The obligation to decline would not have been absolute; it would have been an obligation to find a compliant solution or to proceed without AI assistance. This counterfactual illuminates a structural point: the ethical failure was not the decision to use AI per se, but the decision to use a specific category of AI tool - open-source, publicly accessible - without first obtaining client consent for the data exposure that use necessarily entailed. Had Engineer A followed the disclosure-and-consent pathway, the subsequent work product might have been produced differently - perhaps with a privacy-compliant AI tool, perhaps without AI assistance at all - but the client relationship and the engineer's ethical standing would have been preserved. The counterfactual also suggests that the Board's conclusion that AI use is not unethical per se should be understood as conditional on the use of appropriate tools under appropriate consent frameworks, not as a blanket endorsement of any AI tool for any purpose.
DetailsIn response to Q402: If Engineer A had conducted a rigorous, line-by-line technical review of the AI-generated design documents - equivalent in thoroughness to the review applied to the report - the safety omissions and dimensional errors would very likely have been identified before submission to Client W, and such a review would have been substantially more likely to satisfy the Responsible Charge standard. The case establishes a clear asymmetry: Engineer A's thorough review of the report was sufficient to catch factual inaccuracies and verify content quality, while the cursory review of the design documents was not sufficient to detect regulatory non-compliance and dimensional errors. This asymmetry suggests that the review process, not the use of AI per se, was the determinative variable in the design document failure. A rigorous review - one that checked each dimension against site survey data, verified each specification against local regulatory requirements, and confirmed the presence of all required safety features - would have functioned as an adequate Responsible Charge mechanism even for AI-generated outputs, provided the reviewing engineer possessed the domain competence to evaluate what they were reviewing. Engineer A possessed that domain competence in groundwater infrastructure design. The ethical failure was therefore not the use of AI drafting tools, but the decision to apply a cursory rather than rigorous review standard to safety-critical outputs from an untested tool. This counterfactual reinforces the Board's conclusion that AI-assisted drafting is not unethical per se, while clarifying that the adequacy of the review process is the critical ethical variable.
DetailsIn response to Q404: If Engineer A had explicitly cited the use of AI software in the report - identifying which sections were AI-generated and which were independently authored - Client W's observation that the report read as if written by two different authors would have been resolved rather than raised as a concern. The stylistic inconsistency that Client W detected was, in fact, an accurate artifact of the report's dual-origin nature: AI-generated prose tends to exhibit a characteristic uniformity and polish that differs from the more variable style of human technical writing, particularly from an engineer who self-identifies as less confident in technical writing. Explicit attribution would have provided Client W with a framework for understanding and contextualizing that observation, transforming a source of unease into a transparent feature of the work product. However, explicit attribution would also have raised a different set of questions: it would have invited Client W to scrutinize the AI-generated sections more carefully, to inquire about the AI tool's data handling practices, and potentially to raise concerns about the confidential data exposure that had already occurred. In this sense, disclosure would have been simultaneously clarifying and consequential - it would have resolved the authorship ambiguity while potentially surfacing the deeper confidentiality violation. This counterfactual suggests that the ethical case for disclosure is stronger than the Board's agnostic conclusion implies: transparency about AI use not only serves intellectual honesty but also enables clients to exercise informed oversight of work products that may have been generated under conditions they would not have approved.
DetailsThe tension between Professional Competence Satisfied for Report Writing and Intellectual Honesty in Authorship was left substantively unresolved by the Board. The Board accepted that Engineer A's thorough factual verification of AI-generated text satisfied the competence dimension of responsible charge for the report, but it did not squarely confront the authorship dimension: when an engineer personalizes AI-generated prose with only minor wording adjustments and submits it under a professional seal without attribution, the seal implicitly represents that the engineer is the intellectual author of the work product. These two principles pull in opposite directions - competence review can be satisfied by rigorous fact-checking, but intellectual honesty in authorship requires that the origin of the substantive prose be accurately represented. The case teaches that competence and authorship are distinct professional obligations, and that satisfying one does not discharge the other. A fully ethical resolution would have required Engineer A to either disclose the AI's generative role or to rewrite the report in their own voice after verification, rather than treating minor wording edits as sufficient to claim authorship.
DetailsThe tension between Responsible Charge Engagement and Competence Assurance Under Novel Tool Adoption was resolved against Engineer A in the design document context, but the resolution reveals a deeper principle hierarchy: when an engineer applies a professional seal, the seal does not merely certify that the engineer reviewed the output - it certifies that the engineer exercised personal, informed judgment over the generative process itself. Because Engineer A had no prior experience with the AI drafting tool and did not understand its full functionality, a cursory high-level review was structurally incapable of satisfying responsible charge, regardless of how much time was spent. The case teaches that the standard of review required to satisfy responsible charge scales inversely with the engineer's familiarity with the generative tool: the less the engineer understands how the tool produces its output, the more rigorous the independent verification must be. Deploying an unfamiliar AI tool is not ethically equivalent to deploying familiar software; it introduces an epistemic gap that only deeper review - not a high-level scan - can close. Public Welfare Paramount ultimately overrides both efficiency and tool novelty as a justification for reduced oversight, particularly where safety-critical omissions in design documents could reach construction.
DetailsThe tension between Client Data Confidentiality in AI Tool Use and Mentorship Continuity and Succession Planning exposes a systemic vulnerability that the Board's conclusions do not address: Engineer A's loss of Engineer B's peer review created professional pressure to substitute AI assistance for human oversight, but the only available AI tool was open-source, meaning that satisfying the need for quality assurance necessarily required exposing Client W's confidential site data and groundwater monitoring information to a public platform without prior consent. This creates a structural conflict in which the engineer cannot simultaneously honor the confidentiality obligation and use the available compensating mechanism. The case teaches that this conflict is not resolvable by choosing one principle over the other after the fact - it is resolvable only by proactive planning before the engagement begins. The principle of Mentorship Continuity and Succession Planning, read alongside the confidentiality obligation under Code provision II.1.c, implies that when a primary quality assurance mechanism is lost, the engineer's first obligation is to identify a compliant replacement - whether a qualified peer reviewer, a privacy-compliant AI platform, or a scope limitation - before accepting work that cannot be competently and confidentially performed alone. Engineer A's failure to engage in that prior planning rendered the confidentiality breach not merely a procedural lapse but a foreseeable consequence of an inadequately structured professional practice.
ethical question 21
Was Engineer A’s use of AI to create the report text ethical, given that Engineer A thoroughly checked the report?
Was Engineer A’s use of AI-assisted drafting tools to create the engineering design documents ethical, given that Engineer A reviewed the design at a high level?
If the use of AI was acceptable, did Engineer A have an ethical obligation to disclose the use of AI in any form to the Client?
By uploading Client W's confidential site data and groundwater monitoring information into an open-source AI platform without obtaining prior consent, did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c, and does this violation stand as a separate ethical breach from any question about AI disclosure or report quality?
Given that Engineer B's retirement removed the primary quality assurance mechanism Engineer A had relied upon, did Engineer A have an independent ethical obligation to arrange an alternative peer review process before undertaking a complex, dual-scope engagement involving an unfamiliar AI tool, rather than substituting AI-generated output for that professional oversight?
When Client W observed that the report read as if written by two different authors, did Engineer A incur an immediate ethical obligation to proactively disclose the AI's role in drafting the more polished sections, or was silence in that moment itself a deceptive act under Code provisions I.5 and III.3?
Does Engineer A's failure to include citations to the professional journal articles used to cross-check AI-generated content constitute a violation of the obligation to give credit for engineering work under Code provision III.9, and does it additionally undermine the evidentiary foundation of a technical report that may inform regulatory or remediation decisions?
Does the principle of Professional Competence Satisfied for Report Writing conflict with the principle of Intellectual Honesty in Authorship when Engineer A's thorough factual verification of AI-generated text is used to justify sealing a report whose prose was substantially composed by a non-human system, potentially misrepresenting the nature and origin of the professional work product to Client W?
Does the principle of Responsible Charge Engagement conflict with the principle of Competence Assurance Under Novel Tool Adoption when an engineer applies their professional seal to AI-generated design documents after only a cursory review, given that the seal legally certifies personal responsible charge over work whose generative process the engineer does not fully understand?
Does the principle of Client Data Confidentiality in AI Tool Use conflict with the principle of Mentorship Continuity and Succession Planning when an engineer, deprived of a trusted peer reviewer, turns to an open-source AI platform as a substitute quality assurance mechanism, thereby necessarily exposing confidential client data to a third-party system in order to compensate for the loss of professional oversight?
Does the principle of Public Welfare Paramount conflict with the principle of AI Tool Transparency and Disclosure Applied to Client W Relationship when the Board concludes there is no universal ethical obligation to disclose AI use, yet the case demonstrates that undisclosed AI-generated design documents containing safety-critical omissions were submitted to a client and could have reached construction without correction had Client W not independently identified the defects?
From a deontological perspective, did Engineer A fulfill their duty of candor toward Client W by submitting AI-generated work products without disclosure, regardless of whether the final outputs were accurate?
From a deontological perspective, did Engineer A breach their categorical duty to maintain Responsible Charge by sealing engineering design documents that contained safety omissions and dimensional errors they had only cursorily reviewed?
From a consequentialist perspective, did the harm produced by Engineer A's cursory review of AI-generated design documents - resulting in misaligned dimensions and omitted safety features - outweigh any efficiency benefits gained from using AI-assisted drafting tools, and does this outcome retroactively render the decision to use those tools unethical?
From a virtue ethics perspective, did Engineer A demonstrate the professional integrity and intellectual honesty expected of a licensed engineer by personalizing AI-generated report text with only minor wording adjustments and presenting it under their professional seal without attribution, even if the content was factually verified?
From a virtue ethics perspective, did Engineer A exhibit the prudence and professional humility expected of a competent engineer by choosing to deploy a novel, unfamiliar AI drafting tool - with no prior experience - as a substitute for the mentorship and peer review previously provided by Engineer B, rather than seeking alternative qualified oversight?
From a consequentialist perspective, did Engineer A's decision to input Client W's confidential site data into open-source AI software - without obtaining prior consent - create a foreseeable risk of harm to Client W's proprietary interests that outweighs the drafting efficiency gained, and should that risk calculus have been apparent to a competent engineer before acting?
If Engineer A had disclosed their intended use of open-source AI software to Client W before beginning work, and Client W had withheld consent to upload confidential site data to a public AI platform, would Engineer A have been obligated to decline the use of AI tools entirely or to seek a privacy-compliant alternative, and how would that have affected the deliverables?
If Engineer A had conducted a rigorous, line-by-line technical review of the AI-generated design documents - equivalent to the thorough review applied to the report - rather than a cursory high-level check, would the safety omissions and dimensional errors have been caught before submission to Client W, and would that level of review have been sufficient to satisfy the Responsible Charge standard?
If Engineer B had not retired and had continued to provide quality assurance review of Engineer A's work products, would Engineer A have been less likely to over-rely on AI tools, and does the absence of mentorship create a systemic professional vulnerability that the NSPE Code of Ethics should address through explicit guidance on peer review succession planning?
If Engineer A had explicitly cited the use of AI software in the report - including identifying which sections were AI-generated and which were independently authored - would Client W's observation that the report 'read as if written by two different authors' have raised or resolved concerns about the reliability and professional accountability of the work product?
Phase 2E: Rich Analysis
causal normative link 6
Engineer A's decision to select an AI drafting tool to compensate for self-assessed technical writing limitations, without prior experience or mentor oversight, violates the competence obligation while being constrained by the boundary of professional competence and the absence of peer review succession following Engineer B's retirement.
Uploading Client W's confidential information into a publicly accessible open-source AI interface directly violates the obligation to obtain client consent before sharing data with third parties and breaches the principle of client data confidentiality, which constrains how AI tools may be used in professional engagements.
Although Engineer A conducted a thorough review of the AI-generated report draft, partially satisfying responsible charge and competence verification obligations, the review still failed to address attribution and citation integrity requirements, leaving intellectual authorship obligations violated.
Submitting the AI-drafted report to Client W without disclosing the use of AI tools or providing proper attribution violates multiple disclosure, transparency, and intellectual authorship obligations, all of which are directly constrained by professional norms requiring proactive client communication about AI involvement.
Engineer A's use of an unfamiliar open-source AI tool to generate groundwater infrastructure design documents, without adequate verification or responsible charge oversight, produced non-compliant documents with dimensional errors and omitted safety features, violating the broadest set of safety, regulatory, competence, disclosure, and engineering judgment obligations in the case.
By performing only a cursory review of AI-generated design documents rather than a thorough, responsible-charge-level verification, Engineer A failed to detect misaligned dimensions and omitted safety features required by local regulations, thereby violating multiple obligations related to responsible charge, regulatory compliance, public safety, and the non-substitution of engineering judgment with AI output.
question emergence 21
The question emerged because Engineer A's thorough review created a plausible claim of responsible charge compliance, yet the AI's role as primary text generator created a simultaneous claim of authorship misrepresentation, placing two legitimate ethical warrants in direct tension over the same set of facts. The stylistic inconsistency detected by the client further destabilized the assumption that review alone resolved the authorship question.
The question arose because the combination of an unfamiliar AI tool, a complex dual-scope engagement, and only cursory review produced design documents with safety-critical defects, forcing a determination of whether the ethical failure lay in the choice to use AI at all, in the inadequacy of the review, or in both simultaneously. The discovery of non-compliant outputs after submission made the adequacy of Engineer A's responsible charge engagement impossible to defend on the facts.
The question emerged precisely because it is conditional: even granting the permissibility of AI use, a separate normative question remains about whether the client relationship and the engineer's transparency obligations independently require disclosure of AI's generative role. The stylistic inconsistency detected by the client made the undisclosed AI contribution a discovered rather than volunteered fact, sharpening the question of whether proactive disclosure was ethically required.
The question emerged because Engineer A's upload of confidential client data to a public AI platform created a potential confidentiality breach that is factually and normatively distinct from the questions of AI disclosure and output quality, requiring determination of whether Code provision II.1.c is triggered by data input into an AI system and whether that breach stands independently of all other ethical issues. The open-source nature of the platform (implying potential data retention or public accessibility) made the confidentiality harm concrete rather than merely theoretical.
The question emerged because Engineer B's retirement transformed what had been a structural quality assurance feature of Engineer A's practice into a gap that Engineer A filled with an untested AI tool rather than an alternative human review mechanism, raising the question of whether the ethical obligation to maintain competent oversight required affirmative succession planning rather than passive reliance on a tool whose outputs proved defective. The subsequent discovery of safety-critical design errors made the absence of peer review causally significant, not merely procedurally relevant.
This question emerged because the data event of Client W detecting a stylistic inconsistency created a concrete moment at which the gap between Engineer A's internal knowledge and Client W's understanding became visible, forcing a collision between the warrant authorizing silence when competent review has occurred and the warrant demanding proactive honesty when a client's trust is demonstrably at risk. The question could not arise before that detection event because prior to it no external signal had surfaced the undisclosed AI contribution.
This question arose because the data configuration (AI-generated prose validated against uncited journal sources and submitted as a professional report) simultaneously satisfies the surface form of competent engineering review while leaving the evidentiary chain invisible to Client W and any downstream regulatory or remediation decision-maker. The tension between the diligence warrant and the attribution warrant could not be resolved by the facts alone because Engineer A's verification activity occupied an ambiguous space between private quality control and substantive intellectual reliance.
This conflict question emerged because the same factual act (thorough verification) simultaneously satisfies one principle and is invoked to justify conduct that violates another, creating a genuine normative collision rather than a simple breach. The question could not be dissolved by adding more facts because the two principles operate on different dimensions of what a professional seal communicates: technical reliability versus authorial integrity.
This conflict question emerged because the introduction of a novel AI drafting tool created a situation where two obligations that normally reinforce each other (responsible charge and competence assurance) came into tension over what each independently requires: responsible charge asks whether the review was adequate, while competence assurance asks whether the engineer was qualified to conduct any review of this tool's outputs at all. The discovery of dimensional errors and safety omissions in the design documents provided the concrete data that neither obligation was satisfied, forcing the question of which failure is primary.
This conflict question emerged because Engineer B's retirement created a structural gap in Engineer A's quality assurance infrastructure at precisely the moment when a novel and high-stakes AI tool was being deployed, forcing Engineer A into a situation where fulfilling one professional obligation (maintaining quality oversight) appeared to require breaching another (protecting client confidentiality). The question arose not from a single bad decision but from the intersection of an organizational event (retirement) and a technological choice (open-source AI), which together produced a dilemma that neither obligation alone could resolve.
This question emerged because the Board's ruling created a logical gap: by declining to establish a universal AI disclosure obligation, it left unresolved whether the causal chain from undisclosed AI use to safety-critical omissions to near-construction submission constitutes a violation of Public Welfare Paramount independently of any transparency duty. The question forces examination of whether two principles that the Board treated as analytically separable are in fact structurally interdependent when the mechanism of harm runs directly through the undisclosed AI process.
This question arose because the deontological duty of candor contains an internal ambiguity: it is unclear whether the duty runs to the accuracy of outputs (which Engineer A satisfied through thorough review) or to the transparency of process (which Engineer A violated by omitting AI attribution). The factual accuracy of the report makes it impossible to resolve the question by appeal to harm alone, forcing a purely structural analysis of what candor categorically requires.
This question emerged because the act of sealing is legally and ethically binary - the seal is either properly affixed or it is not - yet the standard of review required to justify sealing is not explicitly quantified, leaving open whether 'cursory review' falls below the categorical threshold or merely below best practice. The discovery of actual safety omissions and dimensional errors in sealed documents sharpens this ambiguity into a direct question about whether the breach was in the process or the outcome.
This question arose because consequentialism's standard unit of analysis - the action and its consequences - is ambiguous when the action is composite: adopting the AI tool, deploying it without adequate competence, and reviewing its outputs cursorily are three distinct acts with potentially different ethical valences. The near-miss outcome forces the question of whether consequentialist condemnation attaches to the entire chain or only to the link where the engineer's judgment was most deficient.
This question emerged because virtue ethics evaluates character rather than acts or outcomes, yet the relevant virtues - integrity, honesty, and competence - point in different directions when applied to AI-assisted authorship: the engineer demonstrated competence through verification but potentially undermined integrity through non-attribution. The stylistic inconsistency that revealed the AI origin makes the question concrete by showing that the professional presentation was not seamlessly authentic, forcing examination of what intellectual honesty categorically demands of a licensed professional.
This question emerged because Engineer B's retirement removed the established quality assurance structure precisely when Engineer A introduced an unfamiliar AI tool, creating a compounded professional vulnerability that virtue ethics frames as a failure of prudence and humility. The question crystallizes because the data shows two simultaneous structural changes - loss of mentorship and adoption of novel technology - each of which independently contests the warrant that Engineer A exercised competent professional judgment.
This question emerged because the act of inputting confidential client data into a public AI platform without disclosure created a concrete, identifiable harm pathway - exposure of proprietary site information - that a competent engineer should have foreseen before acting. The consequentialist framing sharpens the question by demanding that Engineer A's pre-action risk calculus be evaluated, making the absence of prior consent not merely a procedural failure but evidence of a foreseeable harm that was not adequately weighed.
This question emerged from the structural gap between Engineer A's undisclosed AI use and the consent framework that should have governed it, forcing a counterfactual analysis of what disclosure would have required. The question crystallizes because it contests whether client consent functions as a veto right over engineering tool selection or as a trigger for the engineer's obligation to find compliant alternatives, with the answer materially affecting both the deliverables and the professional responsibility allocation.
This question emerged because the contrast between Engineer A's thorough review of the report and cursory review of the design documents created an observable differential in diligence that directly maps onto the discovered defects, making the review standard the contested variable. The question forces a determination of whether Responsible Charge is satisfied by a uniform thoroughness standard or whether AI-generated safety-critical documents require a qualitatively distinct verification methodology.
This question emerged because Engineer B's retirement created a structural discontinuity in Engineer A's professional support system that coincided with the adoption of AI tools, raising the question of whether the resulting over-reliance was a foreseeable systemic risk that professional codes should proactively address. The question contests whether the NSPE Code of Ethics operates only at the level of individual obligation or whether it carries a systemic responsibility to anticipate and mitigate structural vulnerabilities like mentorship succession gaps in an era of AI-assisted practice.
This question emerged because Client W's observation of stylistic inconsistency created an evidentiary gap: without disclosure, the anomaly is unexplained and potentially alarming, but with explicit AI citation, it is unclear whether a professional audience would interpret the attribution as reassuring transparency or as a signal that the engineer substituted AI judgment for independent professional authorship. The question therefore sits at the intersection of the AI Tool Disclosure Obligation, the Intellectual Authorship Integrity Obligation, and the Responsible Charge Active Review Obligation, where each warrant points toward a different prediction about whether disclosure would raise or resolve the reliability concern.
resolution pattern 28
The Board reached a split conclusion by anchoring its analysis on the quality and depth of Engineer A's post-AI review rather than on AI use itself: because Engineer A exercised genuine professional judgment over the report through thorough verification, that use was ethical, but because Engineer A applied only a superficial review to the design documents - allowing safety omissions and dimensional errors to pass through to submission - that use fell below the responsible charge standard and was therefore unethical.
The Board concluded that AI-assisted drafting is not inherently unethical because the Code's obligations attach to the engineer's professional judgment and accountability over the final work product, not to the mechanism of initial drafting - meaning that so long as an engineer exercises competent, meaningful review, the use of AI as a drafting aid does not itself violate any Code provision.
The Board concluded there is no freestanding ethical obligation to disclose AI use to a client because engineering accountability runs through the professional seal and responsible charge framework rather than through authorship attribution, and because no Code provision imposes a tool-disclosure duty absent a contractual requirement - meaning silence about AI use is not itself deceptive unless paired with an affirmative misrepresentation.
The Board identified a self-standing ethical violation under Code provision II.1.c because Engineer A's act of uploading confidential client data to an open-source platform without prior consent created a foreseeable and uncontrolled risk of disclosure - a breach that exists entirely apart from whether the resulting report was accurate or whether AI use was disclosed, because the confidentiality duty is triggered at the moment of unauthorized data exposure, not at the moment of harm.
The Board concluded that C2's permissive finding about AI drafting tools must be read as conditional rather than categorical: because ethical AI tool use implicitly assumes the engineer understands the tool's limitations and applies verification rigor sufficient to maintain genuine responsible charge, and because Engineer A lacked both prior experience with this newly released tool and applied only a superficial review, the preconditions for ethical permissibility were absent - making Engineer A's design document use of AI an independent competence violation under Code provisions I.2 and II.2.a.
The Board resolved the disclosure question by rejecting a blanket rule in either direction: Engineer A had no universal obligation to disclose AI use, but that general conclusion was overridden by the specific facts of this case, where Client W's direct observation of stylistic inconsistency created a discrete moment at which continued silence constituted a deceptive act of omission implicating Code provisions I.5 and III.3, and where the defective design documents demonstrated that undisclosed AI outputs had materially reached the client.
The Board resolved this question by finding that Engineer A's ethical failure was not merely in how AI was used but in the prior decision to use it as a peer review substitute at all: Code provision II.2.a's competence obligation, read alongside the responsible charge standard, imposed an affirmative duty to secure alternative qualified oversight before undertaking a complex dual-scope engagement, and the Board further noted that the NSPE Code's silence on peer review succession planning represents a gap the profession should address.
The Board resolved the design document ethics question by elevating it beyond a procedural review lapse: because Engineer A sealed documents containing regulatory safety omissions after only a cursory review, the violation implicated not only Code provisions II.2.b and III.8.a regarding sealing and registration law compliance, but also the paramount public safety obligation of I.1, making this a failure of the core public protection function that professional licensure exists to serve rather than a mere technical oversight.
DetailsThe board resolved the citation and attribution question by reading Code provision III.9 expansively: beyond preventing credit-theft, it carries an affirmative dimension requiring that a technical report's intellectual sources - including AI-generated synthesis and the journal articles used to verify it - be disclosed so that regulators, future engineers, and legal proceedings can assess the quality and reliability of the underlying analysis, a concern made especially acute by the report's potential role in remediation planning for a contested emerging contaminant.
DetailsThe board resolved the confidentiality question by establishing that Engineer A committed a discrete, self-contained ethical violation under Code provision II.1.c the moment confidential client data was transmitted to a public AI platform without prior consent, and that a competent engineer deploying any novel third-party tool with client data bears an affirmative pre-use obligation to investigate that tool's data handling policies and obtain explicit client authorization - obligations Engineer A fulfilled neither before nor after the upload.
DetailsThe board concluded that Engineer A violated the competence standard under I.2 and II.2.a because qualification for a complex engagement includes maintaining the professional infrastructure - such as peer review - necessary to deliver adequate work, and Engineer A's substitution of an unverified AI tool for that infrastructure, without seeking any alternative qualified oversight, compounded every subsequent deficiency in the deliverables.
DetailsThe board concluded that Engineer A's silence when Client W identified the stylistic inconsistency constituted a deceptive act under I.5 and conduct that deceives under III.3, because the moment of Client W's observation created an independent, context-specific obligation to disclose AI authorship, and Engineer A's failure to respond honestly allowed Client W to proceed under a materially false impression about the report's professional origin.
DetailsThe board concluded that Engineer A's failure to cite the journal articles used to verify AI-generated content violated the credit-giving obligation under III.9 and additionally undermined the evidentiary foundation of the report, because the omission deprived Client W, regulators, and subsequent reviewers of the ability to independently assess whether the cross-checking process was adequate, and created foreseeable risk that AI-hallucinated or outdated information could go undetected by those relying on the report's apparent professional authority.
DetailsThe board concluded that Engineer A's thorough factual verification of AI-generated report text was sufficient to render that use of AI ethical under the competence standard, but acknowledged that this conclusion does not resolve the deeper authorship integrity question, because verification confirms accuracy without transforming AI-generated prose into the engineer's own professional expression, and the tension between these two principles remains genuinely unresolved absent an explicit professional framework for disclosed AI-assisted authorship.
DetailsThe board concluded that Engineer A violated II.2.b by affixing their professional seal to AI-generated design documents after only a cursory review, because the seal certifies responsible charge over work whose generative process Engineer A did not fully understand, and the subsequent discovery of misaligned dimensions and omitted safety features required by local regulations provided concrete confirmation that the review was substantively inadequate to satisfy either the responsible charge or the competence standard.
DetailsThe Board resolved Q11 by qualifying its general conclusion that disclosure is not universally required: it held that the public welfare paramount principle (I.1) affirmatively requires disclosure when AI tools are unfamiliar, unvalidated, and demonstrably capable of generating safety-critical omissions that cursory review fails to catch, because disclosure enables clients and downstream reviewers to apply appropriate scrutiny to outputs whose reliability has not been professionally established.
DetailsThe Board resolved Q12 by applying Kantian deontological analysis: because the maxim underlying Engineer A's conduct - submitting AI-generated work without disclosure provided outputs are verified - cannot be universalized without destroying the trust function of the professional seal, and because the duty of candor is not contingent on harm, Engineer A's silence (especially when directly prompted by Client W's observation) constituted a breach of the duty of candor regardless of the final work product's accuracy.
DetailsThe Board resolved Q13 by holding that Engineer A categorically breached the Responsible Charge duty the moment they affixed their seal to documents they had only cursorily reviewed and did not fully understand, because II.2.b prohibits sealing documents in subject matter where competence is lacking, and Engineer A's unfamiliarity with the AI tool's outputs combined with a review process that failed to detect regulatory non-compliance established that the competence threshold was not met at the time of sealing.
DetailsThe Board resolved Q14 by applying consequentialist expected-value analysis: because a competent engineer could foresee that deploying an unfamiliar AI tool with only cursory review for safety-critical design work would produce a meaningful probability of undetected errors, the expected value of that decision was negative at the time it was made, and the actual outcome - regulatory non-compliance and safety omissions - confirmed rather than created that ethical judgment, rendering the decision unethical regardless of the fact that errors were caught before construction.
DetailsThe Board resolved Q15 by holding that a person of practical wisdom, confronted with recognized writing limitations and the loss of a peer reviewer, would have sought transparent solutions - peer review, client disclosure, or explicit AI attribution - rather than preserving the appearance of unassisted authorship, and that Engineer A's failure to do so, especially when directly prompted by Client W's observation, constituted a failure of intellectual honesty and integrity that is not remediated by the report's factual accuracy.
DetailsThe board concluded that Engineer A failed to exhibit prudence and professional humility because the decision to deploy an unfamiliar AI tool as a substitute for Engineer B's expert review compounded rather than mitigated professional risk, and a prudent engineer would instead have sought an alternative peer reviewer, disclosed the limitation to Client W, or scoped the engagement to match verified capabilities.
DetailsThe board concluded that Engineer A's decision to upload Client W's confidential data to an open-source AI platform without prior consent was a consequentialist failure because the foreseeable risk of regulatory exposure, competitive harm, and reputational damage to Client W clearly outweighed the drafting efficiency gained, and Engineer A's ignorance of the platform's data handling practices reinforced rather than mitigated this conclusion.
DetailsThe board concluded through counterfactual analysis that had Engineer A followed a disclosure-and-consent pathway, the ethical obligation would have been to identify a privacy-compliant AI alternative or proceed without AI assistance rather than to decline the engagement entirely, and this counterfactual clarifies that the board's non-condemnation of AI use per se is conditional on appropriate consent frameworks and tool selection.
DetailsThe board concluded through counterfactual analysis that a rigorous line-by-line review of the AI-generated design documents would very likely have identified the safety omissions and dimensional errors before submission, and that such a review would have substantially satisfied the Responsible Charge standard, thereby reinforcing that the ethical failure was the decision to apply a cursory rather than rigorous review standard to safety-critical outputs from an untested tool.
DetailsThe board concluded through counterfactual analysis that explicit citation of AI use in the report would have resolved Client W's authorship concern by providing a transparent framework for the stylistic inconsistency, while simultaneously - and consequentially - inviting scrutiny of the AI tool's data handling practices and potentially surfacing the confidentiality violation, suggesting that the ethical imperative for disclosure is stronger than a mere procedural recommendation.
DetailsThe Board concluded that Engineer A's rigorous fact-checking was professionally adequate for competence purposes but substantively failed to resolve whether sealing AI-generated prose with only minor edits - without disclosure - constituted an implicit misrepresentation of authorship; by leaving this tension unresolved, the Board implicitly signaled that satisfying competence review does not extinguish the separate duty of intellectual honesty, and that a fully ethical resolution required either disclosure of the AI's generative role or a complete rewrite in the engineer's own voice.
DetailsThe Board concluded against Engineer A on the design document question because the professional seal certifies not merely that output was reviewed but that the engineer exercised personal, informed judgment over the generative process itself; since Engineer A lacked the familiarity with the AI tool necessary to evaluate how it produced its output, no high-level scan could close the epistemic gap, and the resulting safety omissions and dimensional errors confirmed that the public welfare obligation had been materially compromised by inadequate oversight of an unfamiliar generative system.
DetailsThe Board concluded that Engineer A's confidentiality breach was not merely a procedural lapse but a foreseeable consequence of an inadequately structured professional practice, because the loss of Engineer B's peer review created a quality assurance gap that Engineer A attempted to fill with an open-source AI tool - a substitution that made the confidentiality violation structurally inevitable; the Board held that the principle of Mentorship Continuity and Succession Planning, read alongside Code provision II.1.c, imposed an obligation to identify a compliant replacement oversight mechanism before accepting work that could not be competently and confidentially performed alone.
DetailsPhase 3: Decision Points
canonical decision point 18
When Engineer A submitted the AI-generated environmental report to Client W under a professional seal - and when Client W directly observed that the report appeared to have been written by two different authors - did Engineer A have an ethical obligation to proactively disclose the AI's generative role, and did silence in that moment constitute a deceptive act?
When Engineer A used a newly released, unfamiliar AI-assisted drafting tool to generate engineering design documents and then conducted only a cursory high-level review before applying a professional seal, did Engineer A satisfy the Responsible Charge standard and the competence obligation under Code provisions I.2, II.2.a, and II.2.b, given that the resulting documents contained misaligned dimensions and omitted safety features required by local regulations?
Did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c by uploading Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining prior consent, and does this violation stand as a separate ethical breach from any question about AI disclosure, report quality, or design document adequacy?
When Client W observed that the report appeared to have been written by two different authors, and given that AI generated substantial portions of the report text, did Engineer A have an ethical obligation to proactively disclose the AI's generative role - either before submission or at the moment of Client W's inquiry - or was silence permissible under the analogy to conventional engineering software?
Before inputting Client W's confidential site data and groundwater monitoring information into an open-source AI platform, did Engineer A have an independent ethical obligation to investigate the platform's data handling practices and obtain Client W's prior informed consent - and does the failure to do so constitute a discrete, self-standing breach of Code provision II.1.c separate from any question about report quality or AI disclosure?
After Engineer B's retirement removed the primary quality assurance mechanism Engineer A had relied upon, and given Engineer A's unfamiliarity with the newly released AI drafting tool, did Engineer A satisfy the Responsible Charge standard by applying only a cursory high-level review before sealing AI-generated design documents - or did the combination of tool novelty, peer review absence, and safety-critical scope require either a rigorous independent technical review or an alternative peer review arrangement before proceeding?
What standard of review must Engineer A apply to AI-generated design documents before affixing a professional seal, given unfamiliarity with the AI drafting tool and the safety-critical nature of the outputs?
When Client W directly observed that the report appeared to have been written by two different authors - an observation that accurately described the report's AI-generated and human-authored sections - did Engineer A incur an immediate, affirmative obligation to disclose the AI's generative role, and does silence in that moment constitute a deceptive act under Code provisions I.5 and III.3?
Before inputting Client W's confidential site data and groundwater monitoring information into an open-source AI platform to assist with report drafting, did Engineer A have an independent, affirmative obligation to investigate the platform's data handling practices and obtain Client W's prior informed consent - and does the failure to do so constitute a discrete ethical violation under Code provision II.1.c separate from any question about report quality or AI disclosure?
What standard of review should Engineer A apply to AI-generated engineering design documents before affixing their professional seal, given their unfamiliarity with the AI drafting tool and the safety-critical nature of the outputs?
When Client W directly observed that the environmental report appeared to have been written by two different authors - an observation that accurately reflected the report's AI-generated and human-authored sections - did Engineer A incur an immediate ethical obligation to disclose the AI's generative role, or was silence in that moment ethically permissible given the absence of a universal disclosure requirement?
After Engineer B's retirement removed Engineer A's primary quality assurance mechanism, did Engineer A have an independent ethical obligation to arrange a functionally equivalent alternative peer review process before undertaking a complex dual-scope engagement - and did the decision to substitute an open-source AI tool for that oversight independently violate the client data confidentiality obligation by necessarily exposing Client W's proprietary site data to a public platform without prior consent?
Given that Engineer A used a novel, unfamiliar AI drafting tool to generate engineering design documents and then applied their professional seal after only a cursory high-level review - which failed to detect misaligned dimensions and omitted safety features required by local regulations - what standard of review was ethically required to satisfy Responsible Charge and the competence obligation under Code provisions I.2, II.2.a, and II.2.b?
When Engineer B's retirement removed the primary quality assurance mechanism Engineer A had structurally relied upon, and Engineer A then faced a complex dual-scope engagement involving an unfamiliar AI tool, did Engineer A have an independent ethical obligation to arrange a functionally equivalent alternative peer review process before proceeding - rather than substituting AI-generated output for that professional oversight?
By uploading Client W's confidential site data and groundwater monitoring information into an open-source AI platform without obtaining prior consent, did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c, and what affirmative steps were required before inputting confidential client data into any novel third-party AI system?
Given that Engineer B's retirement removed Engineer A's primary quality assurance mechanism, and that Engineer A had no prior experience with the newly released AI drafting tool, what standard of review was required before affixing a professional seal to the AI-generated engineering design documents?
When Client W observed that the environmental report appeared to have been written by two different authors - an observation that accurately reflected the report's dual-origin nature - did Engineer A incur an immediate ethical obligation to disclose the AI's generative role, and does silence in that moment constitute a deceptive act under Code provisions I.5 and III.3, independent of whether disclosure was required before submission?
Did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c by uploading Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining prior consent, and does this violation stand as a discrete ethical breach separate from any question about report quality, AI disclosure, or design document accuracy?
Phase 4: Narrative Elements
Characters 8
Timeline Events 35 -- synthesized from Step 3 temporal dynamics
The case centers on an engineering firm where AI-generated design documents and reports were produced under conditions that did not meet state engineering standards and regulations. This foundational context sets the stage for a series of professional and ethical decisions that would ultimately raise serious questions about competence, transparency, and public safety.
The engineer made a deliberate decision to use an AI tool to assist in drafting a professional engineering report, rather than relying solely on traditional methods. This choice introduced new risks around accountability and professional responsibility, as the engineer retained full legal and ethical obligation for the accuracy of the final work product.
In the process of using the AI tool, the engineer entered sensitive and proprietary client data into a publicly accessible AI platform not approved for confidential information. This action potentially exposed protected client information to unauthorized parties, constituting a serious breach of professional confidentiality obligations.
Before submission, the engineer conducted a careful and comprehensive review of the AI-generated report to verify its technical accuracy and completeness. This diligent review represented a critical step in exercising professional judgment and fulfilling the engineer's duty to ensure the integrity of work bearing their seal.
The engineer submitted the completed report to the client without disclosing that AI tools had been used in its preparation. This omission raised significant ethical concerns regarding transparency and honesty, as clients and regulatory bodies may have a legitimate interest in knowing how engineering work products are generated.
The engineer extended their use of AI beyond report writing by also employing it to generate formal engineering design documents. This escalation increased the ethical and legal stakes considerably, as design documents carry direct implications for public health, safety, and welfare.
Unlike the thorough review applied to the report, the engineer performed only a superficial review of the AI-generated design documents before approving them. This cursory oversight failed to meet the standard of care expected of a licensed professional engineer and left potentially critical errors undetected.
Engineer B, a senior colleague who may have provided oversight or mentorship within the firm, retired during this period. This departure is significant because it may have removed an experienced check on the engineer's work, potentially contributing to the lapse in professional standards that followed.
Client W Engagement Established
Confidential Data Exposed to AI
AI Report Draft Generated
AI Design Documents Generated
Report Stylistic Inconsistency Detected
Design Document Defects Discovered
Tension between the AI Tool Disclosure Obligation (breached by Engineer A in the report submission to Client W) and the AI-Generated Work Product Disclosure Constraint (Engineer A's report submission)
Tension between the AI Tool Disclosure Obligation (breached by Engineer A in the design document submission to Client W) and Competence Assurance Under Novel Tool Adoption (applied to the AI drafting tool)
Did Engineer A have an ethical obligation to disclose the AI's generative role in drafting the environmental report to Client W — both at submission and upon Client W's direct observation of stylistic inconsistency — and does submitting AI-generated prose with only minor wording edits under a professional seal without attribution constitute a breach of intellectual authorship integrity and candor?
Did Engineer A satisfy the Responsible Charge and competence standards by conducting only a cursory, high-level review of AI-generated design documents produced by a novel, unfamiliar drafting tool before affixing a professional seal, given that the review failed to detect misaligned dimensions and omitted safety features required by local regulations?
Did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c by uploading Client W's proprietary site data and groundwater monitoring information into an open-source AI platform without obtaining prior consent, and does this breach stand as a self-contained ethical violation regardless of the accuracy or quality of the resulting work products?
Did Engineer A have an ethical obligation to proactively disclose the use of AI tools to Client W when submitting AI-generated work products, and did silence in the face of Client W's direct observation about stylistic inconsistency constitute a deceptive act?
Did Engineer A independently violate the client confidentiality obligation under Code provision II.1.c by uploading Client W's confidential site data and groundwater monitoring information to an open-source AI platform without obtaining prior consent, and does this constitute a discrete ethical breach separate from any question about AI disclosure or work product quality?
Did Engineer A satisfy the Responsible Charge standard and competence obligation under Code provisions II.2.a and II.2.b by applying only a cursory, high-level review to AI-generated engineering design documents before affixing their professional seal, given that the documents contained safety omissions and dimensional errors that the review failed to detect?
Should Engineer A fulfill the Intellectual Authorship Integrity Obligation and the AI-Assisted Design Comprehensive Verification Obligation by conducting thorough, proportionate review of AI-generated work products before sealing and submitting them to Client W, given that the report received a thorough review while the design documents received only a cursory high-level check?
Should Engineer A fulfill the Proactive AI Disclosure to Client Obligation by disclosing the use of AI tools to Client W — particularly at the moment Client W directly observed that the report appeared to have been written by two different authors — or does silence in that moment constitute a deceptive act under Code provisions I.5 and III.3?
Should Engineer A fulfill the Client Data Confidentiality Obligation and the Peer Review Succession Obligation by obtaining Client W's prior consent before uploading confidential site data to an open-source AI platform, and by arranging an alternative qualified peer review mechanism to replace Engineer B's oversight before undertaking a complex dual-scope engagement — rather than substituting an unfamiliar open-source AI tool for both functions?
Should Engineer B (as the mentor/quality assurance figure whose retirement precipitated Engineer A's AI over-reliance) have fulfilled the Responsible Charge Active Review Obligation and AI-Generated Work Product Competence Verification Obligation by ensuring continuity of oversight before retiring, and does Engineer A's subsequent cursory review of AI-generated design documents constitute a categorical breach of responsible charge?
Should Engineer B (as the departing mentor) have fulfilled the Mentorship Succession and Peer Review Continuity Obligation by arranging or facilitating alternative peer review mechanisms for Engineer A before retiring, and does Engineer A bear an independent obligation to arrange such alternatives rather than substituting an unfamiliar AI tool for professional oversight?
Should Engineer B (as the quality assurance anchor for Engineer A's practice) have fulfilled the AI-Generated Work Product Competence Verification Obligation and Regulatory Compliance Verification Obligation by ensuring Engineer A possessed sufficient competence with the AI tool and applied adequate verification rigor before sealing outputs, and does Engineer A's failure to do so — combined with silence when Client W identified stylistic inconsistency — constitute independent ethical violations of candor and competence?
Should Engineer A conduct a rigorous, line-by-line technical review of AI-generated design documents sufficient to detect safety omissions and dimensional errors before affixing a professional seal, rather than relying on a cursory high-level check?
Should Engineer A verify sufficient competence with a novel AI drafting tool and disclose its use to Client W — particularly when client-observable anomalies arise and when confidential client data is necessarily transmitted to a public platform — as preconditions for ethically permissible AI-assisted work product submission?
Should Engineer A arrange a functionally equivalent alternative peer review mechanism — and select a confidentiality-compliant AI tool — before undertaking a complex dual-scope engagement after losing the primary quality assurance resource provided by Engineer B, rather than substituting an unfamiliar open-source AI tool for that professional oversight?
Should Engineer A apply a rigorous, line-by-line technical review to AI-generated work products before affixing a professional seal, or is a high-level cursory review sufficient to satisfy the Responsible Charge standard when AI-assisted drafting tools are used?
Should Engineer A have assessed their own competence with a novel AI drafting tool — including its capabilities, limitations, and failure modes — before deploying it for safety-critical engineering design documents, or was domain expertise in the subject matter sufficient to satisfy the competence standard for AI-assisted work?
Should Engineer A have obtained Client W's prior informed consent before uploading confidential site data and groundwater monitoring information to an open-source AI platform, and independently arranged alternative peer review after Engineer B's retirement, rather than proceeding without either safeguard?
Engineer A's use of AI in report writing was partly ethical and partly unethical.
Ethical Tensions 16
Decision Moments 18
- Disclose AI tool's generative role in the report to Client W at submission and clarify AI authorship when Client W raises the stylistic inconsistency observation (board choice)
- Submit the AI-generated report under professional seal without disclosing AI involvement and remain silent when Client W observes the stylistic inconsistency
- Conduct a rigorous, line-by-line technical review of all AI-generated design documents — verifying each dimension, safety feature, and regulatory compliance requirement — before affixing a professional seal, and arrange alternative qualified peer review to compensate for Engineer B's absence (board choice)
- Seal and submit AI-generated design documents after only a cursory high-level review, relying on the AI tool's output without independent verification of dimensions, safety features, or regulatory compliance
- Obtain Client W's prior informed consent before uploading confidential site data to the open-source AI platform, and investigate the platform's data handling and privacy policies before any client data transmission (board choice)
- Upload Client W's confidential site data and groundwater monitoring information into the open-source AI platform without obtaining prior consent or investigating the platform's data handling practices
- Proactively disclose AI tool usage and AI-generated sections to Client W before or upon submission, and clarify AI's role when Client W raises the stylistic inconsistency observation (board choice)
- Submit AI-generated work products without disclosure and remain silent when Client W observes the stylistic inconsistency, treating AI as an internal drafting tool equivalent to other engineering software
- Investigate the open-source AI platform's data handling and privacy policies before use, obtain Client W's explicit prior consent for uploading confidential site data, and identify a privacy-compliant alternative if consent is withheld (board choice)
- Upload Client W's confidential site data and groundwater monitoring information to the open-source AI platform without prior investigation of data handling practices and without obtaining Client W's consent
- Conduct a rigorous, line-by-line technical review of all AI-generated design documents — verifying each dimension against site survey data, each specification against local regulatory requirements, and confirming the presence of all required safety features — before affixing the professional seal board choice
- Apply a cursory, high-level review to AI-generated design documents and affix the professional seal without verifying dimensional accuracy, regulatory compliance, or the presence of required safety features
- Conduct rigorous, line-by-line technical verification of all AI-generated work products — proportionate to tool novelty and safety-criticality — before affixing professional seal, and attribute AI generative contributions in the work product board choice
- Apply a high-level cursory review to AI-generated design documents and seal them without attribution, treating AI output as equivalent to conventional engineering software output
- Proactively disclose AI tool usage and identify AI-generated sections to Client W — particularly upon Client W's direct observation of stylistic inconsistency — and attribute AI generative contributions in both the report and design documents board choice
- Remain silent about AI tool usage when Client W raises the stylistic inconsistency observation, treating AI as an undisclosed internal drafting mechanism equivalent to conventional engineering software
- Obtain Client W's prior informed consent before uploading confidential site data to any third-party AI platform, investigate the platform's data handling and privacy policies before use, and arrange an alternative qualified peer reviewer or privacy-compliant AI tool to replace Engineer B's oversight before accepting the dual-scope engagement board choice
- Upload confidential client data to the open-source AI platform without prior consent and proceed with the engagement using the AI tool as a substitute for Engineer B's peer review oversight, treating the efficiency benefit as sufficient justification
- Conduct rigorous line-by-line technical review of all AI-generated design documents, verifying each dimension against site survey data and each specification against local regulatory requirements, before affixing professional seal board choice
- Apply cursory high-level review of AI-generated design documents and affix professional seal without verifying regulatory compliance or dimensional accuracy against site-specific requirements
- Arrange alternative qualified peer review mechanism (qualified colleague, professional review service, or subconsultant) before accepting the dual-scope engagement following Engineer B's retirement board choice
- Proceed with the engagement by substituting a newly released open-source AI tool for Engineer B's expert review without arranging any alternative human oversight mechanism
- Disclose AI tool's generative role to Client W when Client W raises the stylistic inconsistency observation, cite journal articles used to cross-check AI content, and verify all AI-generated design outputs against local regulatory requirements before sealing board choice
- Remain silent about AI's generative role when Client W raises the stylistic inconsistency, omit citations to verification sources, and seal design documents after cursory review without verifying regulatory compliance
- Conduct rigorous line-by-line technical review of AI-generated design documents verifying each dimension, specification, and safety feature against site data and local regulatory requirements before sealing board choice
- Perform cursory high-level review of AI-generated design documents and affix professional seal without verifying individual dimensions, specifications, or regulatory safety feature compliance
- Verify competence with the AI tool before deployment, disclose AI use to Client W when client-observable anomalies arise or safety-critical outputs are involved, and cite sources used to cross-check AI-generated content board choice
- Deploy novel AI tool without prior competence verification, remain silent about AI authorship when client raises stylistic concerns, and submit work products without attribution or citation of cross-checking sources
- Arrange alternative qualified peer review before accepting the engagement, select a privacy-compliant AI tool with contractual data protection guarantees or obtain Client W's explicit consent before uploading confidential data, and scope the engagement to match verified professional infrastructure board choice
- Proceed with the engagement by substituting an unfamiliar open-source AI tool for Engineer B's peer review function and upload confidential client data to the public platform without obtaining prior consent or investigating its data handling practices
- Apply rigorous, line-by-line technical review of all AI-generated work products before sealing, verifying each dimension, specification, and safety feature against regulatory requirements and site data board choice
- Conduct a high-level cursory review of AI-generated design documents before sealing, relying on the AI tool's output quality without independently verifying each technical element
- Assess competence with the novel AI tool before deployment, investigate its capabilities and failure modes, and arrange alternative qualified peer review to compensate for the loss of Engineer B's oversight before undertaking the engagement board choice
- Deploy the novel AI drafting tool relying on existing domain expertise in groundwater infrastructure design as sufficient competence, without separately investigating the tool's limitations or arranging alternative peer review
- Obtain Client W's prior informed consent before uploading any confidential site data to the AI platform, investigate the platform's data handling and privacy policies, and arrange alternative qualified peer review to replace Engineer B's oversight function before accepting the engagement board choice
- Upload confidential client data to the open-source AI platform without prior consent and proceed without arranging alternative peer review, relying on the AI tool as a substitute for Engineer B's quality assurance function