Step 4: Review
Review extracted entities and commit to OntServe
Phase 2A: Code Provisions
code provision reference 5
Hold paramount the safety, health, and welfare of the public.
Engineers shall hold paramount the safety, health, and welfare of the public.
Engineers shall approve only those engineering documents that are in conformity with applicable standards.
Engineers may express publicly technical opinions that are founded upon knowledge of the facts and competence in the subject matter.
Engineers shall advise their clients or employers when they believe a project will not be successful.
Phase 2B: Precedent Cases
precedent case reference 1
The Board cited this case to establish that engineers must balance technical safety obligations against business pressures, and that the overriding ethical responsibility is to hold paramount the safety, health, and welfare of the public. It is used as an analogous precedent for Engineer A's obligations in the autonomous vehicle context.
Phase 2C: Questions & Conclusions
ethical conclusion 18
That being said, to address the specific question posed in the case, Engineer A has an obligation to state that the prime ethical obligation of the vehicle's operation is to minimize harm so that the least number of persons are affected.
Beyond the Board's finding that Engineer A must recommend minimizing harm to the least number of persons, Engineer A bears an additional obligation to explicitly disclose to the automobile manufacturer that this recommendation is grounded in a utilitarian ethical framework rather than in any established regulatory or industry standard. Because no applicable national or industry standards governing autonomous vehicle harm-allocation decision logic currently exist, Engineer A cannot represent the harm-minimization recommendation as a technically mandated or universally accepted engineering norm. Presenting it as such would violate the completeness and non-selectivity obligation that governs Engineer A's advisory role. Engineer A must therefore clearly communicate to the automobile manufacturer that the recommendation reflects a specific moral philosophy - one that reasonable engineers and ethicists might contest - so that the manufacturer can make a genuinely informed deployment decision. This disclosure obligation is heightened, not relieved, by the regulatory standards vacuum, because the absence of external standards places the full burden of ethical transparency on Engineer A as the professional advisor.
The Board's conclusion that Engineer A must recommend harm minimization for the least number of persons does not fully resolve what Engineer A's obligations become if the automobile manufacturer overrides that recommendation and elects to program the vehicle to prioritize passenger safety above third-party welfare. In that scenario, Engineer A's ethical obligations do not terminate upon delivery of the initial recommendation. Engineer A must first pursue graduated internal escalation within the risk assessment team and up the manufacturer's organizational hierarchy, clearly documenting the safety concern and its basis in the public welfare paramount principle. If internal escalation fails to produce a design that Engineer A can professionally certify as consistent with the obligation to hold paramount the safety, health, and welfare of the public - including pedestrians, cyclists, and motorcyclists who are third parties to the client relationship - Engineer A must consider whether continued participation in the project constitutes implicit endorsement of a harm-allocation algorithm that foreseeably causes fatal injury to third parties. At that threshold, refusal to certify the system or withdrawal from the engagement may be required. The consultant relationship does not diminish this obligation; the NSPE Code's public welfare paramount duty applies equally to consultants and employees, and the absence of a direct employment relationship does not reduce the enforceability of Engineer A's professional ethical duties.
The Board's harm-minimization conclusion, while sound as a first-order ethical directive, does not adequately account for the possibility that a technically superior mitigation option - such as a sensor-based dynamic crash evaluation system capable of real-time scenario assessment rather than pre-committed algorithmic harm-allocation logic - could dissolve or substantially reduce the binary ethical dilemma between passenger safety and third-party harm minimization. Engineer A's obligation to explore additional technical mitigation options before accepting the dilemma as irreducible is itself an ethical duty, not merely a technical preference. Analogous to the reasoning in BER Case 96-4, where Engineer A was obligated to recommend further study and additional testing before deployment of safety-critical software, Engineer A in the present case must recommend that the risk assessment team investigate whether the harm-allocation decision can be made dynamically rather than pre-committed, thereby potentially achieving better outcomes for all parties across a wider range of crash scenarios. Recommending harm minimization without first exhausting technically feasible alternatives that could reduce the need for any pre-committed harm allocation would itself be an incomplete discharge of Engineer A's professional competence and public welfare obligations. If such alternatives are found to be technically infeasible, Engineer A must document that finding transparently so that the manufacturer's deployment decision is fully informed.
The Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons implicitly adopts a utilitarian ethical framework - specifically, an aggregate harm-minimization calculus - without acknowledging that this represents one among several defensible moral philosophies rather than a universally accepted engineering standard. A deontological framework, for instance, might prohibit the vehicle from actively redirecting harm toward any third party regardless of aggregate outcome, treating each person's life as inviolable rather than as a unit in a welfare sum. Because Engineer A is advising an automobile manufacturer on a design decision that will be embedded in a consumer product affecting the public, Engineer A has an affirmative obligation under the principle of Completeness and Non-Selectivity in Advisory Opinions to disclose to the manufacturer that the harm-minimization recommendation reflects a specific moral philosophy, that alternative frameworks exist and yield different algorithmic outcomes, and that the selection among them is not a purely technical determination. Failure to make this disclosure would present the manufacturer with an incomplete picture of the decision it is actually making, impairing its ability to give informed consent to the embedded ethical framework and potentially exposing it to legal and reputational consequences it did not knowingly accept.
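The claim that different moral frameworks "yield different algorithmic outcomes" from the same crash facts can be made concrete in a minimal sketch. Everything below - the option fields, the harm counts, and the two policy functions - is invented for illustration; it is not drawn from any NSPE standard or real autonomous-vehicle design.

```python
# Illustrative sketch only: option fields, harm counts, and both policy
# functions are hypothetical, not any NSPE or industry standard.
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    passenger_harm: int    # expected casualties among vehicle occupants
    third_party_harm: int  # expected casualties among pedestrians/cyclists

def utilitarian_choice(options):
    """Aggregate harm minimization: every life counts equally in one sum."""
    return min(options, key=lambda o: o.passenger_harm + o.third_party_harm)

def deontological_choice(options):
    """Side-constraint reading: never actively redirect harm onto a third
    party; such options are excluded regardless of aggregate totals."""
    permitted = [o for o in options if o.third_party_harm == 0]
    # If no option is clean, the constraint offers no verdict and the
    # dilemma is genuine; fall back to the full set for illustration.
    return min(permitted or options, key=lambda o: o.passenger_harm)

scenario = [
    Option("swerve toward sidewalk", passenger_harm=0, third_party_harm=1),
    Option("brake straight ahead", passenger_harm=2, third_party_harm=0),
]

# Same facts, different verdicts: exactly the divergence Engineer A
# must disclose to the manufacturer.
assert utilitarian_choice(scenario).label == "swerve toward sidewalk"
assert deontological_choice(scenario).label == "brake straight ahead"
```

The point of the sketch is only that framework selection changes the chosen maneuver, which is why it cannot be presented as a purely technical determination.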
In the absence of applicable regulatory or industry standards governing autonomous vehicle harm-allocation decision logic, Engineer A has an affirmative obligation to recommend that the automobile manufacturer publicly disclose the ethical framework embedded in the vehicle's operating system to prospective consumers before deployment. This obligation arises from the convergence of three independent sources: first, the Public Welfare Paramount principle, which requires that the public be protected not only from physical harm but from material deception about the nature of products that affect their safety; second, the Autonomous System Moral Framework Transparency Obligation, which recognizes that when an algorithm pre-commits to a harm-allocation outcome on behalf of a user who cannot intervene in real time, that user and affected third parties have a legitimate interest in knowing the decision logic governing their fate; and third, the regulatory standards vacuum itself, which - as the Board recognized analogously in BER Case 96-4 - heightens rather than relieves Engineer A's disclosure obligations precisely because no external regulatory body has yet stepped in to mandate transparency. The absence of a legal requirement to disclose does not extinguish the professional ethical duty to recommend disclosure. Engineer A's recommendation should therefore include not only the harm-minimization algorithm design but also a specific advisory that the manufacturer implement pre-sale consumer disclosure of the vehicle's decision logic as a condition of ethically responsible deployment.
If the automobile manufacturer, after receiving Engineer A's recommendation to minimize aggregate harm, decides to override that recommendation and program the vehicle to prioritize passenger safety above all others, Engineer A's ethical obligations do not terminate at the point of initial recommendation. Engineer A retains at minimum three residual obligations. First, under the principle of Graduated Internal Escalation Before External Reporting, Engineer A must formally document the disagreement and communicate to the manufacturer's decision-makers - in writing - that the passenger-priority algorithm creates a foreseeable risk of fatal harm to third parties that Engineer A regards as ethically unjustifiable, ensuring that the override decision is made with full awareness of its consequences rather than by default or inattention. Second, Engineer A must assess whether the resulting system design crosses the threshold from a debatable design choice into a design that Engineer A cannot in good conscience certify as safe for public deployment; if it does, Engineer A must decline to approve or certify the system under Code provision II.1.b., which prohibits approval of engineering documents not in conformity with sound engineering principles protective of public safety. Third, if internal escalation fails and Engineer A concludes that deployment of the passenger-priority algorithm poses an unreasonable risk of fatal harm to identifiable third-party classes - pedestrians, cyclists, motorcycle riders - Engineer A must evaluate whether external reporting obligations are triggered, recognizing that the NSPE Code's public welfare paramount obligation is not discharged merely by voicing concern internally when that concern is overridden and the harmful design proceeds.
Engineer A's role as a consultant rather than a direct employee does not diminish the substantive scope of his ethical obligations under the NSPE Code, but it does affect the procedural mechanisms available to discharge them. The Code's public welfare paramount obligation applies with equal force to consultants and employees; Engineer A cannot invoke the consultant relationship as a basis for providing a narrower or more deferential safety assessment than an employee engineer would be required to provide. However, the consultant relationship does affect how far Engineer A must press concerns before his professional duty is satisfied in one specific respect: a consultant who has formally documented a safety concern, communicated it clearly to the client's responsible decision-makers, and been overruled has discharged the internal escalation component of his obligation more rapidly than an employee embedded in a hierarchical organization with multiple escalation tiers. The consultant's professional independence - which is itself a resource that the client engaged - means that Engineer A's obligation to provide an honest, complete, and unvarnished assessment of third-party harm risks is if anything stronger than that of an employee who might face internal organizational pressure to soften findings. Accordingly, Engineer A's consultant status heightens the independence and completeness obligations while compressing the internal escalation sequence, and does not create any basis for a reduced or qualified duty of care toward third-party public safety.
The tension between the Faithful Agent Obligation - requiring Engineer A to serve the automobile manufacturer's interests - and the Third-Party Non-Client Welfare Consideration is real but resolvable within the NSPE Code's hierarchy of obligations. The Code does not treat these duties as co-equal: the public welfare paramount obligation is explicitly primary, and the faithful agent duty operates only within the ethical limits that the paramount obligation defines. This means that when the manufacturer's commercial interest in a passenger-protective algorithm conflicts with the safety of pedestrians, cyclists, and motorcycle riders, Engineer A is not required to balance these interests as if they were of equal weight. Instead, Engineer A must first satisfy the third-party safety obligation - by recommending the harm-minimization approach - and may then, within that constraint, seek to serve the manufacturer's interests by identifying technical solutions that minimize passenger harm within the harm-minimization framework. The faithful agent obligation does not authorize Engineer A to recommend a design that foreseeably causes fatal harm to third parties in order to protect the manufacturer's commercial position. What it does require is that Engineer A present the harm-minimization recommendation in a manner that is constructive, professionally grounded, and attentive to the manufacturer's legitimate interests in developing a commercially viable and legally defensible product - not that Engineer A suppress or soften the recommendation to accommodate those interests.
From a deontological perspective, Engineer A has an obligation that is stronger than - and not fully captured by - the Board's utilitarian harm-minimization conclusion. The categorical imperative, applied to the autonomous vehicle harm-allocation problem, yields a distinct constraint: Engineer A must not recommend a design that treats any class of persons - whether passengers or third parties - as mere instruments for the benefit of another class. A passenger-priority algorithm that systematically redirects lethal force toward pedestrians treats pedestrians as means to passenger safety ends, which a Kantian analysis would prohibit regardless of aggregate welfare outcomes. Conversely, a pure harm-minimization algorithm that in specific scenarios sacrifices a single passenger to save multiple pedestrians may itself treat the passenger as a means to aggregate welfare ends. The deontological implication for Engineer A is not simply to recommend harm minimization, but to recommend that the design team explore whether any algorithm can be constructed that avoids pre-committing to the instrumental use of any person's life - for example, by designing for crash avoidance rather than crash outcome optimization, or by ensuring that the system's decision logic does not systematically disadvantage any identifiable class. Engineer A's obligation under this framework includes flagging to the manufacturer that the entire framing of the harm-allocation problem as a binary choice between passenger priority and aggregate minimization may itself embed morally problematic assumptions that warrant further study before deployment.
From a virtue ethics standpoint, Engineer A demonstrates the professional integrity and moral courage required of a virtuous engineer precisely by actively and unambiguously expressing concerns about harm-allocation algorithms within the risk assessment team, even when facing commercial pressure to prioritize passenger safety. Virtue ethics evaluates not only the content of Engineer A's recommendation but the manner and disposition with which it is made. A virtuous engineer in Engineer A's position would not merely file a technically correct recommendation and withdraw; he would engage substantively with the team's deliberations, articulate the moral stakes of the design decision in terms accessible to non-engineer stakeholders, and persist in raising concerns through appropriate channels if the initial recommendation is dismissed. The virtue of practical wisdom - phronesis - is particularly relevant here: it requires Engineer A to recognize that the harm-allocation problem is not purely technical, that the risk assessment team's composition and mandate may not be adequate to resolve the embedded ethical questions, and that recommending further interdisciplinary study before deployment is itself an expression of professional integrity rather than a failure to provide a definitive answer. A virtuous engineer does not manufacture false certainty about genuinely contested moral questions in order to satisfy a client's desire for a clean recommendation.
If Engineer A had remained silent or provided only a partial assessment of third-party harm risks within the risk assessment team, the automobile manufacturer would not have had sufficient information to make an ethically informed deployment decision, and Engineer A's silence would have constituted a violation of both the faithful agent obligation and the public welfare paramount obligation. The faithful agent obligation requires Engineer A to provide the manufacturer with complete, accurate, and professionally grounded information relevant to the design decision - including information that is commercially inconvenient. Partial disclosure that omits the third-party harm implications of a passenger-priority algorithm would deprive the manufacturer of the ability to make an informed choice about the ethical and legal risks it is assuming. Simultaneously, Engineer A's silence would violate the public welfare paramount obligation by allowing a design to proceed toward deployment without the safety concerns having been formally raised, documented, and considered. The Code provision at III.1.b. - requiring engineers to advise clients when a project will not be successful - applies by analogy: a harm-allocation algorithm that foreseeably causes fatal harm to third parties in a predictable class of scenarios is not a successful engineering outcome, and Engineer A is obligated to say so. Silence in the face of a known, foreseeable, and serious public safety risk is not a neutral act under the NSPE Code; it is a breach of the engineer's professional duty.
If the automobile manufacturer had already established a firm design policy prioritizing passenger safety above all third-party considerations before Engineer A joined the risk assessment team, Engineer A's ethical obligations would shift materially - from recommendation toward escalation and, if necessary, refusal to certify. Under these circumstances, Engineer A's initial obligation to recommend the harm-minimization approach would remain, but its character would change: rather than being a prospective design input, it would function as a formal objection to an existing policy. Engineer A would be required to document that objection in writing, communicate it to the manufacturer's responsible decision-makers, and make clear that the existing passenger-priority policy creates foreseeable fatal risks to third parties that Engineer A regards as inconsistent with the public welfare paramount obligation. If the manufacturer declined to reconsider the policy after receiving this formal objection, Engineer A would face the question of whether to continue participating in the project. Continued participation in the design and certification of a system that Engineer A has formally identified as posing an unreasonable risk of fatal harm to third parties would be difficult to reconcile with the Code's prohibition on approving engineering documents not in conformity with sound engineering principles. Engineer A would therefore be obligated to decline to certify or approve the system, and to evaluate whether the severity and foreseeability of the third-party harm risk triggers any external reporting obligation under the public welfare paramount principle.
Had established national or industry standards governing autonomous vehicle harm-allocation decision logic existed at the time of Engineer A's assessment - analogous to the draft standards emerging in BER Case 96-4 - Engineer A's obligation to recommend further study before deployment would have been qualitatively different in character, though not necessarily stronger in absolute terms. The existence of applicable standards would have provided Engineer A with an external, professionally validated benchmark against which to evaluate the manufacturer's proposed algorithm, reducing the degree to which Engineer A's recommendation rested on Engineer A's individual ethical judgment. This would have made the recommendation more defensible, more actionable, and more likely to be accepted by the manufacturer. However, the absence of such standards does not weaken Engineer A's substantive obligation; it merely changes its epistemic basis. In the regulatory vacuum that actually exists, Engineer A's obligation to recommend further study is grounded in the recognition - itself drawn from the BER Case 96-4 analogy - that the absence of applicable standards is itself a safety-relevant fact that the manufacturer must be made aware of before deployment. The regulatory gap heightens the disclosure obligation and strengthens the case for recommending further interdisciplinary study, because it means that no external body has yet validated any harm-allocation approach as meeting a minimum standard of public safety. Engineer A's recommendation in the absence of standards must therefore be more explicitly provisional, more clearly flagged as reflecting one among several defensible approaches, and more strongly oriented toward recommending that deployment await the development of at least preliminary industry consensus.
If Engineer A had proposed and the team had successfully identified a technical mitigation option - such as a sensor-based system capable of dynamically evaluating crash scenarios in real time rather than relying on pre-committed algorithmic harm-allocation logic - the core ethical dilemma between passenger safety and third-party harm minimization would be substantially but not fully dissolved. A dynamic real-time evaluation system would eliminate the most ethically troubling feature of pre-committed harm-allocation logic: the systematic, categorical pre-assignment of fatal risk to identifiable classes of persons based on their mode of transportation rather than on the actual circumstances of a specific crash. However, Engineer A would retain significant residual ethical obligations even if such a system were technically feasible. First, Engineer A would be obligated to assess and disclose the reliability limitations of the dynamic evaluation system - including sensor failure modes, edge cases where real-time evaluation is impossible, and the possibility that the system's dynamic decisions might themselves embed implicit harm-allocation biases through the weighting of its input variables. Second, Engineer A would be obligated to recommend that the dynamic system's decision logic be made transparent to consumers and regulators, since the ethical concerns about algorithmic opacity do not disappear merely because the algorithm operates in real time rather than through pre-commitment. Third, Engineer A would be obligated to recommend that the dynamic system undergo further study and testing before deployment, since the novelty of the technology means that its real-world performance across the full range of crash scenarios cannot be validated through design analysis alone. The identification of a technical mitigation option reduces but does not eliminate Engineer A's public safety obligations.
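The contrast between pre-committed and dynamic harm-allocation logic, and the residual fallback problem the paragraph flags, can be sketched in a toy form. The maneuver fields, the confidence threshold, and the minimax fallback below are all invented assumptions, not a real autonomous-vehicle control policy.

```python
# Toy contrast between pre-committed and dynamic harm-allocation logic.
# Fields, threshold, and the minimax fallback are invented assumptions.
def precommitted(maneuvers):
    """Design-time rule fixed for every crash: always protect passengers,
    whatever the circumstances - the pre-committed logic at issue."""
    return min(maneuvers, key=lambda m: m["passenger_harm"])

def dynamic(maneuvers, sensor_confidence, threshold=0.9):
    """Per-crash choice from live sensor estimates. When estimates are
    unreliable, fall back to minimizing the worst single-class harm - the
    residual failure mode Engineer A must still assess and disclose."""
    if sensor_confidence < threshold:
        return min(maneuvers,
                   key=lambda m: max(m["passenger_harm"], m["third_party_harm"]))
    return min(maneuvers,
               key=lambda m: m["passenger_harm"] + m["third_party_harm"])

maneuvers = [
    {"label": "swerve", "passenger_harm": 0, "third_party_harm": 3},
    {"label": "brake", "passenger_harm": 1, "third_party_harm": 0},
]

assert precommitted(maneuvers)["label"] == "swerve"  # class-based, always
assert dynamic(maneuvers, sensor_confidence=0.95)["label"] == "brake"  # per-crash
assert dynamic(maneuvers, sensor_confidence=0.40)["label"] == "brake"  # fallback
```

Note that the dynamic chooser still needs a pre-committed fallback for low-confidence cases, which is why the sketch supports the paragraph's claim that the dilemma is reduced rather than dissolved.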
The tension between the Faithful Agent Obligation Within Ethical Limits and the Third-Party Non-Client Welfare Consideration is resolved in this case by treating the automobile manufacturer's commercial interest in a passenger-protective algorithm as categorically subordinate to the welfare of pedestrians, cyclists, and motorcyclists who bear the fatal risk of the vehicle's pre-committed harm-allocation logic. The Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons effectively establishes a lexical ordering: Public Welfare Paramount operates as a side-constraint on the faithful agent role, not merely as one factor to be weighed against client interest. This means Engineer A's duty to serve the automobile manufacturer does not extend to endorsing an algorithm that systematically transfers lethal risk onto non-consenting third parties in order to protect paying passengers. The case teaches that when client interest and third-party safety are genuinely zero-sum - as they are in a pre-committed harm-allocation algorithm - the NSPE Code resolves the tension by collapsing the faithful agent role at the boundary where client service would require engineering complicity in foreseeable third-party fatalities.
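The lexical ordering described above - public welfare as a hard filter, faithful agency optimized only inside it - is essentially lexicographic decision-making, which the following sketch illustrates with invented risk scores and an arbitrary harm ceiling.

```python
# Sketch of the lexical ordering: public welfare acts as a hard filter
# (stage 1); client interest is optimized only among survivors (stage 2).
# Risk scores and the harm ceiling are hypothetical.
def recommend(designs, harm_ceiling):
    # Stage 1 - Public Welfare Paramount: a side-constraint, not a weight.
    acceptable = [d for d in designs if d["third_party_risk"] <= harm_ceiling]
    if not acceptable:
        # Nothing certifiable: the faithful-agent role "collapses" here;
        # the engineer escalates or declines rather than balancing.
        return None
    # Stage 2 - faithful agent within the constraint: serve the client by
    # maximizing passenger protection among acceptable designs only.
    return max(acceptable, key=lambda d: d["passenger_protection"])

designs = [
    {"name": "passenger_priority", "third_party_risk": 0.9, "passenger_protection": 0.95},
    {"name": "harm_min", "third_party_risk": 0.2, "passenger_protection": 0.70},
]

assert recommend(designs, harm_ceiling=0.5)["name"] == "harm_min"
assert recommend(designs, harm_ceiling=0.1) is None  # no certifiable option
```

The structure makes the Code's hierarchy explicit: no passenger-protection score, however high, can buy back a design filtered out at stage 1.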
The Competing Public Goods Balancing principle - which acknowledges that vehicle passengers hold legitimate safety interests - does not neutralize the Public Welfare Paramount principle in this case; rather, the two principles interact to produce a qualified rather than absolute harm-minimization mandate. The Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons implicitly acknowledges that passenger safety is a genuine public good, not merely a commercial preference, but treats aggregate harm reduction across all affected parties as the governing metric when those goods conflict. This resolution carries an important teaching: the Competing Public Goods Balancing principle functions as a corrective against naive utilitarian aggregation that would ignore passenger welfare entirely, while Public Welfare Paramount prevents that corrective from being weaponized to justify algorithms that predictably sacrifice a greater number of third-party lives to protect a smaller number of passengers. The net effect is that Engineer A's recommendation must be grounded in a harm-minimization calculus that counts all lives equally, resisting both pure passenger-priority logic and any framing that treats third-party lives as infinitely more valuable than passenger lives.
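The equal-weight calculus can be written down directly: a probability-weighted casualty sum in which every life carries the same weight of 1.0. The scenario probabilities, casualty counts, and policy names below are invented; the point is only that moving either weight away from 1.0 reintroduces exactly the bias the principle rules out.

```python
# Minimal sketch of an equal-weight expected-harm calculus. Probabilities,
# casualty counts, and policy names are invented for illustration.
def expected_harm(policy, scenarios, passenger_weight=1.0, third_party_weight=1.0):
    """Probability-weighted casualty sum under `policy`. The Board's reading
    corresponds to both weights fixed at 1.0: every life counts equally."""
    total = 0.0
    for s in scenarios:
        outcome = s["outcomes"][policy]
        total += s["prob"] * (passenger_weight * outcome["passengers"]
                              + third_party_weight * outcome["third_parties"])
    return total

scenarios = [
    {"prob": 0.7, "outcomes": {
        "passenger_priority": {"passengers": 0, "third_parties": 2},
        "harm_min": {"passengers": 1, "third_parties": 0}}},
    {"prob": 0.3, "outcomes": {
        "passenger_priority": {"passengers": 0, "third_parties": 1},
        "harm_min": {"passengers": 0, "third_parties": 1}}},
]

# With equal weights, aggregate minimization favors harm_min ...
assert expected_harm("harm_min", scenarios) < expected_harm("passenger_priority", scenarios)
# ... but inflating the passenger weight flips the ranking, which is
# precisely the passenger-priority bias the principle resists.
assert (expected_harm("harm_min", scenarios, passenger_weight=10.0)
        > expected_harm("passenger_priority", scenarios, passenger_weight=10.0))
```

A symmetric flip occurs if the third-party weight is inflated instead, which is why the principle rejects both distortions rather than only the passenger-favoring one.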
The interaction between the Autonomous System Moral Framework Transparency Obligation and the Regulatory Gap Safety Escalation Obligation - both activated by the absence of established national or industry standards governing autonomous vehicle harm-allocation ethics - produces a compounded disclosure duty that is stronger than either principle would generate in isolation. In the software testing context of BER Case 96-4, the regulatory gap triggered an obligation to flag the absence of standards as itself a safety concern and to recommend further study before deployment. Transposed to the autonomous vehicle harm-allocation context, that same gap-triggered escalation obligation combines with the transparency obligation to require Engineer A not only to recommend further study but also to affirmatively disclose to the automobile manufacturer that the harm-allocation recommendation rests on a specific moral framework - utilitarian harm minimization - rather than on a settled engineering standard. The Completeness and Non-Selectivity in Advisory Opinions principle reinforces this synthesis: because any recommendation Engineer A makes in a regulatory vacuum will necessarily reflect contestable ethical assumptions, selective silence about those assumptions would itself be a form of incomplete and potentially misleading professional advice. The case therefore teaches that regulatory vacuums do not relieve disclosure obligations; they intensify them, because the engineer's judgment substitutes for the absent standard and must therefore be rendered fully transparent.
ethical question 17
What are Engineer A’s ethical obligations?
DetailsDoes the Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons implicitly adopt a utilitarian ethical framework, and if so, is Engineer A obligated to disclose to the automobile manufacturer that this recommendation reflects a specific moral philosophy rather than a universally accepted engineering standard?
DetailsIn the absence of applicable regulatory or industry standards governing autonomous vehicle harm-allocation decision logic, does Engineer A have an affirmative obligation to recommend that the automobile manufacturer publicly disclose the ethical framework embedded in the vehicle's operating system to prospective consumers before deployment?
DetailsIf the automobile manufacturer, after receiving Engineer A's recommendation to minimize aggregate harm, decides to override that recommendation and program the vehicle to prioritize passenger safety above all others, what are Engineer A's remaining ethical obligations - including whether Engineer A must refuse to continue consulting on the project or escalate concerns externally?
DetailsDoes Engineer A's role as a consultant to the automobile manufacturer - rather than a direct employee - alter the scope or enforceability of his ethical obligations under the NSPE Code, particularly with respect to how far he must press concerns about harm-allocation design before his professional duty is satisfied?
DetailsDoes the Faithful Agent Obligation Within Ethical Limits - which requires Engineer A to serve the automobile manufacturer's interests - conflict with the Third-Party Non-Client Welfare Consideration, which demands that Engineer A weight the safety of pedestrians, cyclists, and motorcyclists equally or above the client's commercial interest in a passenger-protective algorithm?
Does the Competing Public Goods Balancing principle - which acknowledges legitimate safety interests of vehicle passengers - conflict with the Public Welfare Paramount principle when the algorithm that best protects passengers is the same algorithm most likely to cause fatal harm to third parties, and if so, which principle should govern Engineer A's recommendation?
Does the Autonomous System Moral Framework Transparency Obligation - requiring Engineer A to disclose the ethical assumptions embedded in the harm-allocation algorithm - conflict with the Informed Decision-Making Enablement Obligation owed to the automobile manufacturer client, insofar as full public transparency about the algorithm's moral logic could expose the manufacturer to legal liability or competitive disadvantage that the client has not consented to accept?
Does the Regulatory Gap Safety Escalation Obligation - which in the software testing case required Engineer A to flag the absence of applicable standards as itself a safety concern warranting further study - conflict with the Completeness and Non-Selectivity in Advisory Opinions principle when the regulatory vacuum surrounding autonomous vehicle harm-allocation ethics means that any recommendation Engineer A makes will necessarily be incomplete, potentially leading to selective or premature guidance that could itself cause harm?
From a deontological perspective, does Engineer A have an absolute duty to recommend harm minimization for third parties regardless of the automobile manufacturer's commercial interests, and does this duty derive from the categorical imperative that engineers must never treat third-party lives as mere means to passenger safety ends?
From a consequentialist perspective, does the Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons adequately account for the aggregate welfare calculus across all possible crash scenarios, including cases where passenger sacrifice might produce net societal harm through reduced adoption of safer autonomous vehicles overall?
From a virtue ethics standpoint, does Engineer A demonstrate the professional integrity and moral courage required of a virtuous engineer when actively expressing concerns about harm-allocation algorithms within a risk assessment team that may face significant commercial pressure to prioritize passenger safety over third-party welfare?
From a deontological perspective, does Engineer A's obligation to disclose the moral framework embedded in the autonomous vehicle's harm-allocation algorithm to the public constitute a perfect duty under professional ethics codes, and does the absence of applicable regulatory standards heighten rather than relieve that disclosure duty?
If Engineer A had remained silent or provided only a partial assessment of the third-party harm risks within the risk assessment team, would the automobile manufacturer have had sufficient information to make an ethically informed deployment decision, and would Engineer A's silence have constituted a violation of the faithful agent obligation?
What if the automobile manufacturer had already established a firm design policy prioritizing passenger safety above all third-party considerations before Engineer A joined the risk assessment team - would Engineer A's ethical obligations shift from recommendation to escalation or refusal to certify the system?
Had established national or industry standards governing autonomous vehicle harm-allocation decision logic existed at the time of Engineer A's assessment - analogous to the draft standards emerging in BER Case 96-4 - would Engineer A's obligation to recommend further study before deployment have been stronger, weaker, or qualitatively different in character?
If Engineer A had proposed and the team had successfully identified a technical mitigation option - such as a sensor-based system capable of dynamically evaluating crash scenarios in real time rather than relying on pre-committed algorithmic harm-allocation logic - would the core ethical dilemma between passenger safety and third-party harm minimization have been dissolved, and what residual ethical obligations would Engineer A retain regarding transparency about the system's remaining limitations?
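The distinction these questions turn on - a harm-allocation rule pre-committed at design time versus one that evaluates each crash scenario dynamically - can be made concrete with a minimal sketch. This is a hypothetical illustration only: the class names, maneuver labels, and fatality estimates are assumptions for exposition, not drawn from the case or from any real AV system.

```python
# Hypothetical sketch contrasting a pre-committed passenger-priority policy
# with a dynamic harm-minimization policy. All names and numbers are
# illustrative assumptions, not part of the case record.
from dataclasses import dataclass


@dataclass
class CrashOption:
    """One maneuver available to the vehicle in an unavoidable crash."""
    name: str
    expected_passenger_fatalities: float
    expected_third_party_fatalities: float


def precommitted_passenger_priority(options: list[CrashOption]) -> CrashOption:
    # Pre-committed logic: always choose the option safest for passengers,
    # regardless of the harm imposed on third parties.
    return min(options, key=lambda o: o.expected_passenger_fatalities)


def dynamic_harm_minimization(options: list[CrashOption]) -> CrashOption:
    # Dynamic logic: evaluate the scenario as presented and minimize
    # aggregate expected fatalities, counting all lives equally.
    return min(
        options,
        key=lambda o: o.expected_passenger_fatalities
        + o.expected_third_party_fatalities,
    )


options = [
    CrashOption("swerve into barrier", 1.0, 0.0),
    CrashOption("continue into crosswalk", 0.0, 3.0),
]

# The two policies diverge on the same facts, which is the crux of the dilemma.
print(precommitted_passenger_priority(options).name)  # continue into crosswalk
print(dynamic_harm_minimization(options).name)        # swerve into barrier
```

The divergence on identical inputs is the point: which selection function is embedded is itself a moral decision made before any crash occurs, which is why the questions above treat the framework choice as something that must be disclosed rather than presented as a purely technical matter.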
Phase 2E: Rich Analysis
causal normative link 6
Recommending additional safety testing directly fulfills Engineer A's obligation to advocate for further study before deployment, guided by the do-no-harm and public welfare paramount principles, while constrained by the requirement that such recommendations remain independent of business pressure.
Preparing a transparent technical report fulfills Engineer A's disclosure and informed-decision-enablement obligations by ensuring the client receives complete, non-selective information about harm-allocation logic and emerging standards, constrained by non-deception and completeness requirements.
Active participation in the risk assessment directly fulfills Engineer A's team participation obligation and enables competent evaluation of competing passenger and third-party safety trade-offs, constrained by the requirement to provide objective, good-faith technical input.
Unambiguously expressing safety concerns fulfills Engineer A's obligation to prioritize third-party welfare and public safety over client interests, guided by the do-no-harm and public welfare paramount principles, constrained by the requirement that concerns be expressed in good faith and with objective technical grounding.
Exploring additional technical mitigation options fulfills Engineer A's harm-minimization and further-study obligations by seeking engineering solutions that reduce third-party risk before deployment, constrained by the requirement that such exploration remain independent of business or financial pressures.
Proposing further study before deployment directly fulfills Engineer A's core obligation to recommend additional investigation when harm-allocation decision logic for autonomous vehicles lacks established regulatory standards, thereby upholding the do-no-harm principle and enabling the client to make an informed deployment decision while remaining constrained by the requirement that third-party safety not be subordinated to passenger priority or business pressure.
question emergence 17
This foundational question emerged because Engineer A's assignment to evaluate harm-allocation decision logic placed him at the intersection of multiple NSPE Code obligations - faithful agency, public welfare paramountcy, and do-no-harm - without a governing standard to rank them. The unavoidable crash scenario made the conflict concrete and irresolvable through purely technical means, forcing an ethical determination about the scope of his professional duty.
This question arose because the Board's harm-minimization conclusion implicitly selected utilitarianism from among competing ethical frameworks without acknowledging that selection, creating tension between Engineer A's obligation to provide complete and non-selective advisory opinions and the practical norm of presenting engineering recommendations as technically rather than philosophically grounded. The regulatory standards vacuum amplified this tension by removing any external authority that might have resolved which framework is professionally required.
This question emerged because the combination of an ethically pre-committed algorithm embedded in a consumer product and the complete absence of regulatory standards governing that commitment created a gap in which no external authority compels manufacturer disclosure - leaving Engineer A as the only professional actor positioned to recommend it. The tension between his faithful-agent obligation to the manufacturer and his public-welfare obligation to prospective consumers made the scope of his affirmative disclosure duty genuinely uncertain.
This question arose because the manufacturer's hypothetical override transforms Engineer A from an advisor whose recommendation was considered into a continuing participant in a design he has professionally condemned, activating the tension between his faithful-agent role and his independent public-safety obligation. The absence of a clear NSPE Code threshold for when continued consulting becomes ethically impermissible - as opposed to merely uncomfortable - made the question of refusal and external escalation genuinely open.
This question arose because the consultant relationship introduces a structural asymmetry not addressed in BER Case 96-4 - Engineer A has professional obligations identical to those of an employee engineer but lacks the organizational access and authority that the graduated internal escalation model presupposes. The combination of this structural gap with the high-stakes harm-allocation design context made it genuinely uncertain whether the consultant role narrows the point at which Engineer A's duty to press concerns is professionally satisfied.
This question arose because the AV OS development process forced Engineer A into a role-conflict: the consultant relationship generates a prima facie duty of loyal service, but the unavoidable crash scenario makes that loyalty directly antagonistic to identifiable third-party lives. The absence of a hierarchy between these two warrants in the NSPE Code or applicable standards means neither obligation can simply override the other, producing a genuine ethical question.
This question emerged because the technical fact of a zero-sum crash scenario collapsed the usual assumption that public welfare and client-interest safety can be simultaneously optimized, forcing a direct confrontation between two principles that normally operate in different domains. When the same algorithm cannot simultaneously maximize passenger safety and minimize third-party fatalities, the engineer must choose which principle governs, but neither the NSPE Code nor existing AV standards specifies a decision rule for this exact conflict.
This question arose because the novel nature of algorithmic moral pre-commitment in AV systems means that transparency norms developed for conventional engineering (disclose technical facts to the client) are insufficient - the embedded ethical assumptions are themselves facts that third parties and the public have an interest in knowing. The conflict between client-directed disclosure and public transparency obligations emerged precisely because no prior framework had addressed whether the 'client' or 'the public' is the primary audience for moral framework disclosure in autonomous system design.
This question emerged because the analogical transfer of BER Case 96-4's escalation logic to the AV ethics context revealed a structural disanalogy: in the software testing case, the emerging standard provided a reference point for what 'complete' guidance would look like, whereas in the AV harm-allocation case no such reference point exists, making the escalation obligation and the completeness obligation simultaneously applicable and mutually undermining. The regulatory vacuum thus transforms a procedural question (when to escalate) into a substantive ethical question about whether any recommendation is better than none.
This question arose because the AV unavoidable crash scenario is structurally identical to classic deontological thought experiments (trolley problems, transplant cases) in which the categorical imperative and consequentialist reasoning yield directly opposed conclusions, and Engineer A's professional role forces a practical decision where philosophers have not reached consensus. The question crystallized because the algorithmic pre-commitment nature of AV design means Engineer A cannot defer the moral choice to the moment of the crash - the ethical framework must be selected in advance, making the deontological absolutism question practically urgent rather than merely theoretical.
This question arose because the Board's conclusion was grounded in a direct, scenario-level consequentialist calculus, but the autonomous vehicle context introduces systemic feedback loops - particularly around consumer adoption - that a single-scenario harm-minimization rule does not capture. The tension between immediate crash-outcome welfare and long-run societal welfare from AV proliferation exposes an incompleteness in the warrant structure that the Board applied.
This question emerged because virtue ethics evaluates not just what Engineer A recommends but how Engineer A behaves within a team environment subject to institutional pressure, and the BER 96-4 precedent established that business pressure must not subordinate technical judgment - yet the present case involves a team deliberation rather than a solo advisory role, making the behavioral standard for virtuous participation less determinate. The question surfaces the gap between the obligation to participate actively and the obligation to resist commercial distortion of technical conclusions.
This question arose because deontological ethics distinguishes perfect duties (unconditional, non-defeasible) from imperfect duties (contextually variable), and the regulatory standards vacuum creates genuine ambiguity about which category the transparency obligation falls into - a question that would not arise if standards existed to define the disclosure perimeter. The embedded moral framework in the algorithm constitutes a form of undisclosed pre-commitment affecting third parties who cannot consent, which presses toward a perfect duty, but the absence of regulatory anchoring leaves the duty's scope and addressee (client vs. public) contested.
This question arose because the faithful agent obligation is defined by what the principal needs to make an informed decision, not merely by what the agent chooses to share, and the counterfactual framing exposes whether Engineer A's participation was causally necessary for the manufacturer's ethical decision-making capacity. The BER 96-4 precedent - where Engineer A's obligation to prepare a complete technical report was foundational - makes the silence scenario particularly salient, as it tests whether omission and commission are treated symmetrically under the faithful agent warrant.
This question arose because the pre-commitment of a design policy before Engineer A's engagement changes the ethical geometry: rather than shaping a recommendation from an open design space, Engineer A enters a constrained space where the key moral decision has already been made by the client, forcing the question of whether the faithful agent role still permits meaningful ethical influence or whether the obligation escalates to refusal. The absence of regulatory standards governing harm-allocation ethics means there is no external authority to which Engineer A can appeal to resolve the conflict between client policy and professional duty, making the escalation-versus-refusal boundary particularly indeterminate.
This question emerged because the data of Engineer A operating in a regulatory standards vacuum, juxtaposed against the BER Case 96-4 precedent where emerging draft standards materially shaped the engineer's disclosure and further-study obligations, creates a genuine structural ambiguity: the argument's warrant - that Engineer A must recommend further study - can be authorized by either pure professional ethical judgment or by external standard-anchored duty, and these two authorizing grounds yield different conclusions about the strength and character of the obligation. The question therefore probes whether the warrant's force is intrinsic to the ethical principle or contingent on the institutional scaffolding of formal standards.
This question emerged because the data of a proposed technical mitigation option contests the argument's foundational warrant - that the passenger-safety versus third-party-harm dilemma is an irreducible ethical conflict requiring pre-committed algorithmic resolution - by introducing the possibility that the dilemma's structure itself can be technically transformed rather than merely managed. The question then probes the residual ethical obligations that survive this transformation, recognizing that the Autonomous Vehicle Harm Allocation Algorithmic Pre-Commitment Ethical Constraint and the Autonomous System Moral Framework Transparency Obligation do not automatically dissolve when a technical solution is proposed, because the rebuttal condition of residual system limitations ensures that Engineer A's disclosure and transparency obligations persist in a qualitatively different but not eliminated form.
resolution pattern 18
The Board concluded that Engineer A must recommend harm minimization for the least number of persons because the Public Welfare Paramount principle - codified in both I.1 and II.1 - establishes a non-negotiable floor that subordinates client commercial interests when third-party lives are foreseeably at stake, and the binary structure of the crash scenario makes aggregate harm minimization the most defensible first-order ethical directive available to Engineer A.
The Board concluded that Engineer A bears an affirmative disclosure obligation because presenting a utilitarian harm-minimization recommendation without identifying it as such would violate the completeness and non-selectivity norm that governs advisory opinions, and because the regulatory vacuum surrounding autonomous vehicle ethics means the manufacturer has no external standard to consult - making Engineer A's transparent characterization of the recommendation's philosophical basis the only available mechanism for genuinely informed manufacturer consent.
The Board concluded that Engineer A's ethical obligations survive a manufacturer override because the Public Welfare Paramount duty is not extinguished by client disagreement, and that the consultant relationship is legally but not ethically distinguishable from direct employment - meaning Engineer A must escalate internally, document the concern, and ultimately refuse to certify or withdraw if the manufacturer adopts a harm-allocation algorithm that Engineer A cannot professionally endorse as consistent with the obligation to hold paramount the safety of the public, including third-party non-clients.
The Board concluded that Engineer A's harm-minimization recommendation is a necessary but not sufficient discharge of professional obligations, because the analogical reasoning from BER Case 96-4 requires Engineer A to recommend further study of dynamic technical alternatives before treating the ethical dilemma as irreducible - and that if such alternatives are found infeasible, Engineer A must document that finding transparently so the manufacturer's deployment decision is fully informed rather than premised on an unexamined assumption that the binary choice was unavoidable.
The Board concluded that Engineer A has an affirmative obligation to disclose to the manufacturer that the harm-minimization recommendation reflects a utilitarian moral philosophy, that deontological and other frameworks exist and produce different algorithmic outcomes, and that the selection among them is not a purely technical determination - because failure to make this disclosure would present the manufacturer with an incomplete picture of the decision it is actually making, impairing its ability to give informed consent to the embedded ethical framework and potentially exposing it to legal and reputational consequences it did not knowingly accept.
The Board concluded that Engineer A must affirmatively recommend pre-sale public disclosure of the vehicle's ethical framework because three independent obligations converge on that result - public welfare protection, algorithmic transparency for pre-committed harm decisions, and the heightened duty that arises precisely when no regulator has yet filled the standards vacuum - and because the absence of a legal disclosure requirement does not extinguish the professional ethical duty to recommend it.
The Board concluded that Engineer A retains three sequenced residual obligations after a manufacturer override - written documentation of disagreement, assessment of whether certification must be withheld under II.1.b., and evaluation of external reporting triggers - because the public welfare paramount obligation is a continuing duty that is not discharged by a single recommendation that is subsequently ignored, and because allowing a harmful design to proceed without further action would reduce Engineer A's role to a formality rather than a substantive ethical check.
The Board concluded that Engineer A's consultant status heightens the independence and completeness obligations - because the client engaged that independence as a resource - while compressing the internal escalation sequence, and creates no basis whatsoever for a reduced or qualified duty of care toward third-party public safety, because the NSPE Code's paramount obligation is defined by the nature of the engineering work, not the contractual form of the engagement.
The Board concluded that the tension between faithful agent duty and third-party welfare is real but resolvable without genuine conflict because the Code's hierarchy is unambiguous - public welfare is paramount and the faithful agent duty is subordinate - meaning Engineer A must recommend harm minimization as a non-negotiable baseline and may then constructively assist the manufacturer in developing a commercially viable product within that ethical constraint, but may not suppress or soften the recommendation to protect the manufacturer's commercial position.
The Board concluded that Engineer A's deontological obligation extends beyond recommending harm minimization to recommending that the design team investigate whether any algorithm can avoid pre-committing to the instrumental use of any person's life - for example through crash avoidance rather than crash outcome optimization - and to flagging to the manufacturer that the binary framing of the harm-allocation problem may itself embed morally problematic assumptions that warrant further study before deployment, because the categorical imperative applies symmetrically and prohibits treating any class of persons as mere means regardless of which side of the binary the algorithm favors.
The Board concluded that Engineer A satisfies the virtue ethics standard not merely by filing a technically correct recommendation but by actively, persistently, and accessibly articulating the moral stakes of harm-allocation design to non-engineer stakeholders, and by exercising phronesis to recognize that recommending further interdisciplinary study is itself the virtuous act when the ethical question is genuinely contested and the team's mandate is insufficient to resolve it.
The Board concluded that Engineer A's silence or partial assessment would simultaneously violate the faithful agent obligation (by depriving the manufacturer of complete information) and the public welfare paramount obligation (by allowing a dangerous design to proceed without formal objection), invoking the III.1.b. analogy to establish that a foreseeably fatal harm-allocation algorithm is not a successful engineering outcome and must be identified as such.
The Board concluded that a pre-existing firm passenger-priority policy materially shifts Engineer A's obligations from recommendation to escalation and potential refusal to certify, because continued participation in certifying a system Engineer A has formally identified as posing unreasonable third-party fatal risk would itself constitute a Code violation under the prohibition on approving non-conforming engineering documents, and may additionally trigger external reporting obligations under the public welfare paramount principle.
The Board concluded that the existence of applicable standards would have changed the character of Engineer A's recommendation - making it more externally grounded and more defensible - but would not have strengthened the substantive obligation, because in the actual regulatory vacuum the obligation to recommend further study is independently grounded in the recognition that the gap itself is a safety-relevant fact requiring disclosure and that no external body has yet validated any harm-allocation approach as publicly safe.
The Board concluded that a successfully identified dynamic real-time evaluation system would substantially dissolve the core ethical dilemma by eliminating categorical pre-assignment of fatal risk, but would leave Engineer A with three distinct residual obligations: assessing and disclosing the system's reliability limitations and failure modes, recommending transparency of the dynamic decision logic to consumers and regulators, and recommending further study and testing before deployment given the technology's novelty and the impossibility of validating real-world performance through design analysis alone.
The Board concluded that Engineer A must recommend harm minimization for the least number of persons because the NSPE Code's paramount public safety obligation categorically overrides the faithful agent duty when those two obligations are genuinely zero-sum - meaning that serving the manufacturer's commercial interest in a passenger-protective algorithm would require Engineer A to endorse a system that predictably transfers lethal risk onto non-consenting third parties, a form of complicity the Code does not permit regardless of the client relationship.
The Board concluded that neither pure passenger-priority logic nor a framework that treats third-party lives as infinitely more valuable than passenger lives is ethically defensible under the NSPE Code; instead, the interaction of Public Welfare Paramount and Competing Public Goods Balancing produces a harm-minimization standard that counts all affected lives equally and selects the algorithm that minimizes aggregate fatalities across all parties, thereby resolving the conflict between the two principles through a qualified rather than absolute utilitarian calculus.
The Board concluded that the absence of applicable regulatory or industry standards does not relieve Engineer A of disclosure obligations but instead intensifies them, because in a regulatory vacuum the engineer's own moral framework substitutes for the absent standard and must therefore be rendered fully transparent to the client; combining the transparency obligation with the gap-triggered escalation duty from BER Case 96-4 and the completeness principle, the Board determined that Engineer A must both recommend further study before deployment and affirmatively disclose to the automobile manufacturer that the harm-allocation recommendation rests on utilitarian harm minimization rather than on any settled engineering consensus.
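The qualified harm-minimization calculus these resolution patterns describe - count all affected lives equally, then select the candidate algorithm with the lowest expected fatalities aggregated across modeled crash scenarios - can be sketched as a simple selection rule. The scenario weights, algorithm names, and fatality figures below are hypothetical assumptions introduced purely for illustration.

```python
# A minimal sketch of the aggregate harm-minimization selection rule implied
# by the Board's reasoning: choose the candidate algorithm with the lowest
# probability-weighted expected fatalities across modeled crash scenarios,
# counting passenger and third-party lives equally. All data are assumed.

# Each entry: (scenario probability weight,
#              {algorithm_name: total expected fatalities in that scenario})
scenarios = [
    (0.7, {"passenger_priority": 2.0, "harm_minimization": 1.0}),
    (0.3, {"passenger_priority": 1.0, "harm_minimization": 1.0}),
]


def expected_fatalities(algorithm: str) -> float:
    # Probability-weighted sum of fatalities across all modeled scenarios.
    return sum(weight * outcomes[algorithm] for weight, outcomes in scenarios)


candidates = ["passenger_priority", "harm_minimization"]
best = min(candidates, key=expected_fatalities)
print(best)  # harm_minimization
```

Note that this single-scenario-aggregation form is exactly what the consequentialist critique above targets: it omits systemic feedback loops such as reduced AV adoption, which would require extending the scenario model rather than changing the selection rule.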
Phase 3: Decision Points
canonical decision point 6
How should Engineer A discharge his obligations as a member of the automobile manufacturer's risk assessment team when the crash-avoidance algorithm's harm-distribution logic raises unresolved ethical and safety questions - specifically, whether to actively express concerns and recommend further study before deployment, or to defer to the team's commercial orientation and provide a narrower assessment?
Given that no applicable national or industry standards govern autonomous vehicle harm-allocation decision logic, must Engineer A affirmatively disclose to the automobile manufacturer that the harm-minimization recommendation is grounded in a utilitarian ethical framework rather than a technically mandated norm - and must Engineer A further recommend that the manufacturer publicly disclose the algorithm's embedded moral framework to prospective consumers before deployment?
If the automobile manufacturer overrides Engineer A's harm-minimization recommendation and proceeds with a passenger-priority algorithm that foreseeably creates fatal risk for pedestrians, cyclists, and motorcycle riders, what actions must Engineer A take - and does Engineer A's consultant status affect the scope or sequence of those obligations?
Should Engineer A recommend that the autonomous vehicle's operating system minimize harm to the least number of persons, and actively express that concern within the risk assessment team even under commercial pressure to prioritize passenger safety?
Must Engineer A affirmatively disclose to the automobile manufacturer that the harm-minimization recommendation is grounded in a utilitarian ethical framework rather than an established regulatory or industry standard, and must Engineer A further recommend that the manufacturer publicly disclose the vehicle's embedded ethical decision logic to consumers before deployment?
If the automobile manufacturer overrides Engineer A's harm-minimization recommendation and programs the vehicle to prioritize passenger safety above third-party welfare, what actions must Engineer A take - and does Engineer A's consultant status alter the scope or sequence of those obligations?
Phase 4: Narrative Elements
Characters 5
Timeline Events 23 -- synthesized from Step 3 temporal dynamics
An engineer faces a complex ethical dilemma involving the design of an autonomous system, where decisions must be made about how potential harms and safety responsibilities are allocated among stakeholders. This foundational situation establishes the core tension between technical capability, public safety, and professional obligation.
The engineer formally recommends that the autonomous system undergo additional rounds of safety testing before any further development or deployment decisions are made. This recommendation reflects the engineer's professional duty to ensure that potential failure modes are thoroughly identified and addressed before the system can pose risks to the public.
The engineer prepares a comprehensive and candid technical report that openly documents the system's known limitations, uncertainties, and safety-related findings. By prioritizing transparency over convenience, this report ensures that all relevant parties have accurate information needed to make informed decisions about the system's future.
The engineer takes an active and substantive role in the formal risk assessment process, contributing technical expertise to evaluate the likelihood and severity of potential system failures. This engagement demonstrates the engineer's commitment to ensuring that risk evaluations are grounded in rigorous analysis rather than assumptions or commercial pressures.
The engineer clearly and directly communicates identified safety concerns to supervisors, clients, or other decision-makers, leaving no ambiguity about the nature or seriousness of the risks involved. This decisive action upholds the engineer's ethical obligation to prioritize public safety even when doing so may create professional friction or delay project timelines.
The engineer proactively investigates and proposes additional technical measures that could reduce or eliminate the identified safety risks within the autonomous system. This constructive approach demonstrates that raising safety concerns is paired with a genuine effort to find workable engineering solutions rather than simply halting progress.
The engineer formally advocates for delaying deployment of the autonomous system until further research and study can adequately address the unresolved safety questions. This recommendation prioritizes long-term public welfare over short-term project milestones, reflecting the core principle that engineers must not approve systems whose safety has not been sufficiently validated.
A critical discovery is made that specific software components within the autonomous system directly govern safety-critical functions, meaning that any errors or failures in this code could result in serious harm. This identification significantly escalates the ethical stakes of the case, as it establishes that the system's risks are not merely theoretical but are tied to concrete, high-consequence operational scenarios.
Draft Safety Standards Emerge
Financial Pressure on Testing
Autonomous Vehicle AV OS Development Initiated
Unavoidable Crash Scenario Identified
Precedent Case Principles Activated
Algorithmic Ethics Gap Recognized
Tension between Engineer A AV Risk Assessment Team Harm Minimization Participation Obligation / Autonomous Vehicle Further Study Recommendation Before Deployment Obligation and Engineer A AV Client Interest Third-Party Safety Priority Constraint
Tension between Autonomous Vehicle Harm Minimization Algorithm Completeness Disclosure Obligation / Autonomous Vehicle Moral Framework Public Transparency Disclosure Obligation and Engineer A AV Regulatory Standards Vacuum Escalation Permissibility Constraint
How should Engineer A discharge his obligations as a member of the automobile manufacturer's risk assessment team when the crash-avoidance algorithm's harm-distribution logic raises unresolved ethical and safety questions — specifically, whether to actively express concerns and recommend further study before deployment, or to defer to the team's commercial orientation and provide a narrower assessment?
Given that no applicable national or industry standards govern autonomous vehicle harm-allocation decision logic, must Engineer A affirmatively disclose to the automobile manufacturer that the harm-minimization recommendation is grounded in a utilitarian ethical framework rather than a technically mandated norm — and must Engineer A further recommend that the manufacturer publicly disclose the algorithm's embedded moral framework to prospective consumers before deployment?
If the automobile manufacturer overrides Engineer A's harm-minimization recommendation and proceeds with a passenger-priority algorithm that foreseeably creates fatal risk for pedestrians, cyclists, and motorcycle riders, what actions must Engineer A take — and does Engineer A's consultant status affect the scope or sequence of those obligations?
Should Engineer A recommend that the autonomous vehicle's operating system minimize harm to the least number of persons, and actively express that concern within the risk assessment team even under commercial pressure to prioritize passenger safety?
Must Engineer A affirmatively disclose to the automobile manufacturer that the harm-minimization recommendation is grounded in a utilitarian ethical framework rather than an established regulatory or industry standard, and must Engineer A further recommend that the manufacturer publicly disclose the vehicle's embedded ethical decision logic to consumers before deployment?
If the automobile manufacturer overrides Engineer A's harm-minimization recommendation and programs the vehicle to prioritize passenger safety above third-party welfare, what actions must Engineer A take — and does Engineer A's consultant status alter the scope or sequence of those obligations?
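The divergence between the two algorithmic frameworks at issue can be made concrete with a minimal sketch. This is purely illustrative: the scenario names, harm counts, and both policy functions are hypothetical assumptions for exposition, not a representation of any actual manufacturer's decision logic.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One maneuver available in an unavoidable-crash scenario (hypothetical)."""
    name: str
    passenger_harm: int    # expected persons harmed inside the vehicle
    third_party_harm: int  # expected pedestrians, cyclists, or riders harmed

def harm_minimization(options):
    """Utilitarian policy: choose the maneuver harming the fewest people overall."""
    return min(options, key=lambda o: o.passenger_harm + o.third_party_harm)

def passenger_priority(options):
    """Client-preferred policy: protect occupants first, then minimize other harm."""
    return min(options, key=lambda o: (o.passenger_harm, o.third_party_harm))

options = [
    CrashOption("swerve into barrier", passenger_harm=1, third_party_harm=0),
    CrashOption("continue into crosswalk", passenger_harm=0, third_party_harm=3),
]

print(harm_minimization(options).name)   # swerve into barrier
print(passenger_priority(options).name)  # continue into crosswalk
```

The same inputs yield opposite maneuvers under the two policies, which is precisely why the questions above treat the choice of framework, and its disclosure, as an ethical decision rather than a purely technical one.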
That being said, to address the specific question posed in the case, Engineer A has an obligation to state that the prime ethical obligation of the vehicle operation is to minimize harm to affect the least number of persons.
Ethical Tensions (9)
Decision Moments (6)
- Actively participate in all risk assessment team deliberations; formally document and unambiguously express, in writing, concerns about the harm-allocation algorithm's third-party safety implications; and recommend that the manufacturer commission further interdisciplinary study (including ethical framework analysis and technical mitigation investigation) before deploying the operating system. (board choice)
- Participate in the risk assessment team's technical evaluation, raise third-party harm concerns verbally during team deliberations, and provide a written recommendation that identifies the harm-minimization approach as preferable — without separately recommending that deployment be delayed for further study, on the ground that the team's collective judgment and the manufacturer's business timeline should govern the deployment decision once the technical recommendation has been delivered
- Participate in the risk assessment team's evaluation, recommend harm minimization as the preferred algorithm design, and separately recommend that the manufacturer explore technical mitigation options — such as dynamic real-time crash evaluation systems — as a means of reducing the need for pre-committed harm-allocation logic, framing further study as a technical improvement opportunity rather than a deployment prerequisite
- Explicitly identify in the written risk assessment report that the harm-minimization recommendation is grounded in a utilitarian ethical framework, present the deontological alternative framework and its different algorithmic implications with equal completeness, and include a specific advisory that the manufacturer implement pre-sale public disclosure of the vehicle's harm-allocation decision logic before deployment. (board choice)
- Present both the passenger-priority and harm-minimization frameworks objectively in the risk assessment report — including their respective advantages and disadvantages — without characterizing either as utilitarian or deontological, and recommend that the manufacturer consult legal counsel and ethics advisors regarding consumer disclosure obligations, treating the philosophical labeling and public disclosure questions as outside the scope of the engineering risk assessment mandate
- Identify the philosophical basis of the harm-minimization recommendation in the internal risk assessment report delivered to the manufacturer, recommend that the manufacturer seek interdisciplinary ethics review before finalizing the algorithm design, but decline to recommend specific consumer-facing public disclosure on the ground that disclosure strategy is a legal and business decision within the manufacturer's exclusive authority as client
- Formally document the safety disagreement in writing addressed to the manufacturer's responsible decision-makers, clearly stating that the passenger-priority algorithm creates foreseeable fatal risk to third parties inconsistent with the paramount public-welfare obligation; assess whether the system crosses the certification threshold under Code II.1.b and decline to certify if it does; and evaluate whether the severity and foreseeability of third-party harm trigger external reporting obligations if internal escalation fails. (board choice)
- Formally document the safety disagreement in writing, deliver it to the manufacturer's project lead, and — upon being overruled — treat the internal escalation obligation as discharged given the compressed escalation sequence available in a consultant relationship; continue participating in the project in an advisory capacity without certifying the system, on the ground that declining to certify without an applicable regulatory standard to anchor the refusal would exceed the scope of the consultant's professional mandate
- Document the safety disagreement in the final risk assessment report, recommend that the manufacturer obtain an independent ethics and safety review of the passenger-priority algorithm before deployment, and withdraw from the consulting engagement if the manufacturer proceeds without that review — treating withdrawal as the appropriate professional response that preserves Engineer A's integrity without triggering external reporting obligations that the NSPE Code reserves for more severe and imminent public safety threats
- Formally recommend that the harm-allocation algorithm minimize harm to the least number of persons, actively express this position within the risk assessment team, document the recommendation in writing, and propose further interdisciplinary study and exploration of dynamic real-time mitigation alternatives before deployment. (board choice)
- Present the harm-minimization approach as one among several technically defensible design options, defer to the risk assessment team's collective judgment on which framework to adopt, and limit Engineer A's formal output to a balanced technical summary of competing approaches without a personal recommendation
- Recommend harm minimization internally within the risk assessment team but accept the manufacturer's passenger-priority preference as a legitimate design policy choice within the manufacturer's authority, confining Engineer A's role to optimizing the passenger-priority algorithm's technical implementation rather than contesting the underlying policy
- Explicitly disclose to the automobile manufacturer in the technical report that the harm-minimization recommendation reflects a utilitarian moral philosophy rather than an established engineering standard, identify deontological and other alternative frameworks that yield different outcomes, and affirmatively recommend that the manufacturer implement pre-sale consumer disclosure of the vehicle's harm-allocation decision logic as a condition of ethically responsible deployment. (board choice)
- Disclose the philosophical basis of the harm-minimization recommendation to the manufacturer's engineering and legal teams as part of the confidential consulting deliverable, but limit the consumer disclosure recommendation to a general advisory that the manufacturer consult legal counsel about disclosure obligations, leaving the public transparency decision to the manufacturer's business judgment
- Present the harm-minimization recommendation as Engineer A's professional judgment grounded in the NSPE Code's public welfare paramount obligation without characterizing it as utilitarian or labeling its philosophical foundations, on the basis that the Code itself — rather than a contested moral philosophy — provides the normative authority for the recommendation, and defer consumer disclosure questions to the manufacturer and its regulatory counsel
- Formally document the safety disagreement in writing to the manufacturer's responsible decision-makers, decline to certify or approve the passenger-priority system if it cannot be reconciled with the paramount public-welfare obligation, and evaluate whether the foreseeability and severity of third-party fatal harm trigger an external reporting obligation beyond the consulting engagement. (board choice)
- Document the safety concern in the consulting deliverable, communicate the disagreement verbally to the manufacturer's project lead, and continue participating in the technical optimization of the passenger-priority system while treating the manufacturer's policy override as a legitimate business decision within the client's authority — on the basis that Engineer A's professional duty is satisfied by having raised the concern and that the manufacturer bears ultimate design responsibility
- Withdraw from the consulting engagement upon the manufacturer's override without formal written documentation of the specific certification threshold crossed, on the basis that the consultant relationship does not obligate Engineer A to pursue multi-tier internal escalation through an organization in which Engineer A holds no employment standing, and that withdrawal itself constitutes a sufficient professional signal of non-endorsement