Step 4: Full View
Entities, provisions, decisions, and narrative
Full Entity Graph
Entity Types
Synthesis Reasoning Flow
Shows how NSPE provisions inform questions and conclusions - the board's reasoning chain
Node Types & Relationships
→ Question answered by Conclusion
→ Provision applies to Entity
NSPE Code Provisions Referenced
I.1.
Full Text:
Hold paramount the safety, health, and welfare of the public.
II.1.
Full Text:
Engineers shall hold paramount the safety, health, and welfare of the public.
II.1.b.
Full Text:
Engineers shall approve only those engineering documents that are in conformity with applicable standards.
II.3.b.
Full Text:
Engineers may express publicly technical opinions that are founded upon knowledge of the facts and competence in the subject matter.
III.1.b.
Full Text:
Engineers shall advise their clients or employers when they believe a project will not be successful.
Cited Precedent Cases
BER Case 96-4 (analogizing)
Principle Established:
Engineers have a professional obligation to recommend additional testing or study when public health, safety, and welfare may be at risk, and must make recommendations based solely on technical findings rather than business considerations, so that employers can make informed decisions.
Citation Context:
The Board cited this case to establish that engineers must balance technical safety obligations against business pressures, and that the overriding ethical responsibility is to hold paramount the safety, health, and welfare of the public. It is used as an analogous precedent for Engineer A's obligations in the autonomous vehicle context.
Relevant Excerpts:
"One example of this was BER Case 96-4, which involved software design testing. In that case, Engineer A was employed by a software company and was involved in the design of specialized software"
"Although the facts in the present case are somewhat different than those in Case 96-4, the Board of Ethical Review believes that several points discussed in the previous case are pertinent to the case at hand."
"In BER Case 96-4, Engineer A's ethical concerns in the case were not related directly to the safety of the software, but instead to the availability of a new draft safety testing standard"
Questions & Conclusions
Question 1 Board Question
What are Engineer A’s ethical obligations?
To address the specific question posed in the case, Engineer A has an obligation to state that the prime ethical obligation in the vehicle's operation is to minimize harm so that the fewest persons are affected.
Question 2 Implicit
Does the Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons implicitly adopt a utilitarian ethical framework, and if so, is Engineer A obligated to disclose to the automobile manufacturer that this recommendation reflects a specific moral philosophy rather than a universally accepted engineering standard?
Beyond the Board's finding that Engineer A must recommend minimizing harm to the least number of persons, Engineer A bears an additional obligation to explicitly disclose to the automobile manufacturer that this recommendation is grounded in a utilitarian ethical framework rather than in any established regulatory or industry standard. Because no applicable national or industry standards governing autonomous vehicle harm-allocation decision logic currently exist, Engineer A cannot represent the harm-minimization recommendation as a technically mandated or universally accepted engineering norm. Presenting it as such would violate the completeness and non-selectivity obligation that governs Engineer A's advisory role. Engineer A must therefore clearly communicate to the automobile manufacturer that the recommendation reflects a specific moral philosophy - one that reasonable engineers and ethicists might contest - so that the manufacturer can make a genuinely informed deployment decision. This disclosure obligation is heightened, not relieved, by the regulatory standards vacuum, because the absence of external standards places the full burden of ethical transparency on Engineer A as the professional advisor.
The Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons implicitly adopts a utilitarian ethical framework - specifically, an aggregate harm-minimization calculus - without acknowledging that this represents one among several defensible moral philosophies rather than a universally accepted engineering standard. A deontological framework, for instance, might prohibit the vehicle from actively redirecting harm toward any third party regardless of aggregate outcome, treating each person's life as inviolable rather than as a unit in a welfare sum. Because Engineer A is advising an automobile manufacturer on a design decision that will be embedded in a consumer product affecting the public, Engineer A has an affirmative obligation under the principle of Completeness and Non-Selectivity in Advisory Opinions to disclose to the manufacturer that the harm-minimization recommendation reflects a specific moral philosophy, that alternative frameworks exist and yield different algorithmic outcomes, and that the selection among them is not a purely technical determination. Failure to make this disclosure would present the manufacturer with an incomplete picture of the decision it is actually making, impairing its ability to give informed consent to the embedded ethical framework and potentially exposing it to legal and reputational consequences it did not knowingly accept.
Question 3 Implicit
In the absence of applicable regulatory or industry standards governing autonomous vehicle harm-allocation decision logic, does Engineer A have an affirmative obligation to recommend that the automobile manufacturer publicly disclose the ethical framework embedded in the vehicle's operating system to prospective consumers before deployment?
In the absence of applicable regulatory or industry standards governing autonomous vehicle harm-allocation decision logic, Engineer A has an affirmative obligation to recommend that the automobile manufacturer publicly disclose the ethical framework embedded in the vehicle's operating system to prospective consumers before deployment. This obligation arises from the convergence of three independent sources: first, the Public Welfare Paramount principle, which requires that the public be protected not only from physical harm but from material deception about the nature of products that affect their safety; second, the Autonomous System Moral Framework Transparency Obligation, which recognizes that when an algorithm pre-commits to a harm-allocation outcome on behalf of a user who cannot intervene in real time, that user and affected third parties have a legitimate interest in knowing the decision logic governing their fate; and third, the regulatory standards vacuum itself, which - as the Board recognized analogously in BER Case 96-4 - heightens rather than relieves Engineer A's disclosure obligations precisely because no external regulatory body has yet stepped in to mandate transparency. The absence of a legal requirement to disclose does not extinguish the professional ethical duty to recommend disclosure. Engineer A's recommendation should therefore include not only the harm-minimization algorithm design but also a specific advisory that the manufacturer implement pre-sale consumer disclosure of the vehicle's decision logic as a condition of ethically responsible deployment.
Question 4 Implicit
If the automobile manufacturer, after receiving Engineer A's recommendation to minimize aggregate harm, decides to override that recommendation and program the vehicle to prioritize passenger safety above all others, what are Engineer A's remaining ethical obligations - including whether Engineer A must refuse to continue consulting on the project or escalate concerns externally?
The Board's conclusion that Engineer A must recommend harm minimization for the least number of persons does not fully resolve what Engineer A's obligations become if the automobile manufacturer overrides that recommendation and elects to program the vehicle to prioritize passenger safety above third-party welfare. In that scenario, Engineer A's ethical obligations do not terminate upon delivery of the initial recommendation. Engineer A must first pursue graduated internal escalation within the risk assessment team and up the manufacturer's organizational hierarchy, clearly documenting the safety concern and its basis in the public welfare paramount principle. If internal escalation fails to produce a design that Engineer A can professionally certify as consistent with the obligation to hold paramount the safety, health, and welfare of the public - including pedestrians, cyclists, and motorcyclists who are third parties to the client relationship - Engineer A must consider whether continued participation in the project constitutes implicit endorsement of a harm-allocation algorithm that foreseeably causes fatal injury to third parties. At that threshold, refusal to certify the system or withdrawal from the engagement may be required. The consultant relationship does not diminish this obligation; the NSPE Code's public welfare paramount duty applies equally to consultants and employees, and the absence of a direct employment relationship does not reduce the enforceability of Engineer A's professional ethical duties.
If the automobile manufacturer, after receiving Engineer A's recommendation to minimize aggregate harm, decides to override that recommendation and program the vehicle to prioritize passenger safety above all others, Engineer A's ethical obligations do not terminate at the point of initial recommendation. Engineer A retains at minimum three residual obligations. First, under the principle of Graduated Internal Escalation Before External Reporting, Engineer A must formally document the disagreement and communicate to the manufacturer's decision-makers - in writing - that the passenger-priority algorithm creates a foreseeable risk of fatal harm to third parties that Engineer A regards as ethically unjustifiable, ensuring that the override decision is made with full awareness of its consequences rather than by default or inattention. Second, Engineer A must assess whether the resulting system design crosses the threshold from a debatable design choice into a design that Engineer A cannot in good conscience certify as safe for public deployment; if it does, Engineer A must decline to approve or certify the system under Code provision II.1.b., which prohibits approval of engineering documents not in conformity with sound engineering principles protective of public safety. Third, if internal escalation fails and Engineer A concludes that deployment of the passenger-priority algorithm poses an unreasonable risk of fatal harm to identifiable third-party classes - pedestrians, cyclists, motorcycle riders - Engineer A must evaluate whether external reporting obligations are triggered, recognizing that the NSPE Code's public welfare paramount obligation is not discharged merely by voicing concern internally when that concern is overridden and the harmful design proceeds.
Question 5 Implicit
Does Engineer A's role as a consultant to the automobile manufacturer - rather than a direct employee - alter the scope or enforceability of his ethical obligations under the NSPE Code, particularly with respect to how far he must press concerns about harm-allocation design before his professional duty is satisfied?
Engineer A's role as a consultant rather than a direct employee does not diminish the substantive scope of his ethical obligations under the NSPE Code, but it does affect the procedural mechanisms available to discharge them. The Code's public welfare paramount obligation applies with equal force to consultants and employees; Engineer A cannot invoke the consultant relationship as a basis for providing a narrower or more deferential safety assessment than an employee engineer would be required to provide. However, the consultant relationship does affect how far Engineer A must press concerns before his professional duty is satisfied in one specific respect: a consultant who has formally documented a safety concern, communicated it clearly to the client's responsible decision-makers, and been overruled has discharged the internal escalation component of his obligation more rapidly than an employee embedded in a hierarchical organization with multiple escalation tiers. The consultant's professional independence - which is itself a resource that the client engaged - means that Engineer A's obligation to provide an honest, complete, and unvarnished assessment of third-party harm risks is if anything stronger than that of an employee who might face internal organizational pressure to soften findings. Accordingly, Engineer A's consultant status heightens the independence and completeness obligations while compressing the internal escalation sequence, and does not create any basis for a reduced or qualified duty of care toward third-party public safety.
Question 6 Principle Tension
Does the Autonomous System Moral Framework Transparency Obligation - requiring Engineer A to disclose the ethical assumptions embedded in the harm-allocation algorithm - conflict with the Informed Decision-Making Enablement Obligation owed to the automobile manufacturer client, insofar as full public transparency about the algorithm's moral logic could expose the manufacturer to legal liability or competitive disadvantage that the client has not consented to accept?
Question 7 Principle Tension
Does the Faithful Agent Obligation Within Ethical Limits - which requires Engineer A to serve the automobile manufacturer's interests - conflict with the Third-Party Non-Client Welfare Consideration, which demands that Engineer A weight the safety of pedestrians, cyclists, and motorcyclists equally or above the client's commercial interest in a passenger-protective algorithm?
The tension between the Faithful Agent Obligation - requiring Engineer A to serve the automobile manufacturer's interests - and the Third-Party Non-Client Welfare Consideration is real but resolvable within the NSPE Code's hierarchy of obligations. The Code does not treat these duties as co-equal: the public welfare paramount obligation is explicitly primary, and the faithful agent duty operates only within the ethical limits that the paramount obligation defines. This means that when the manufacturer's commercial interest in a passenger-protective algorithm conflicts with the safety of pedestrians, cyclists, and motorcycle riders, Engineer A is not required to balance these interests as if they were of equal weight. Instead, Engineer A must first satisfy the third-party safety obligation - by recommending the harm-minimization approach - and may then, within that constraint, seek to serve the manufacturer's interests by identifying technical solutions that minimize passenger harm within the harm-minimization framework. The faithful agent obligation does not authorize Engineer A to recommend a design that foreseeably causes fatal harm to third parties in order to protect the manufacturer's commercial position. What it does require is that Engineer A present the harm-minimization recommendation in a manner that is constructive, professionally grounded, and attentive to the manufacturer's legitimate interests in developing a commercially viable and legally defensible product - not that Engineer A suppress or soften the recommendation to accommodate those interests.
The tension between the Faithful Agent Obligation Within Ethical Limits and the Third-Party Non-Client Welfare Consideration is resolved in this case by treating the automobile manufacturer's commercial interest in a passenger-protective algorithm as categorically subordinate to the welfare of pedestrians, cyclists, and motorcyclists who bear the fatal risk of the vehicle's pre-committed harm-allocation logic. The Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons effectively establishes a lexical ordering: Public Welfare Paramount operates as a side-constraint on the faithful agent role, not merely as one factor to be weighed against client interest. This means Engineer A's duty to serve the automobile manufacturer does not extend to endorsing an algorithm that systematically transfers lethal risk onto non-consenting third parties in order to protect paying passengers. The case teaches that when client interest and third-party safety are genuinely zero-sum - as they are in a pre-committed harm-allocation algorithm - the NSPE Code resolves the tension by collapsing the faithful agent role at the boundary where client service would require engineering complicity in foreseeable third-party fatalities.
Question 8 Principle Tension
Does the Competing Public Goods Balancing principle - which acknowledges legitimate safety interests of vehicle passengers - conflict with the Public Welfare Paramount principle when the algorithm that best protects passengers is the same algorithm most likely to cause fatal harm to third parties, and if so, which principle should govern Engineer A's recommendation?
The Competing Public Goods Balancing principle - which acknowledges that vehicle passengers hold legitimate safety interests - does not neutralize the Public Welfare Paramount principle in this case; rather, the two principles interact to produce a qualified rather than absolute harm-minimization mandate. The Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons implicitly acknowledges that passenger safety is a genuine public good, not merely a commercial preference, but treats aggregate harm reduction across all affected parties as the governing metric when those goods conflict. This resolution carries an important teaching: the Competing Public Goods Balancing principle functions as a corrective against naive utilitarian aggregation that would ignore passenger welfare entirely, while Public Welfare Paramount prevents that corrective from being weaponized to justify algorithms that predictably sacrifice a greater number of third-party lives to protect a smaller number of passengers. The net effect is that Engineer A's recommendation must be grounded in a harm-minimization calculus that counts all lives equally, resisting both pure passenger-priority logic and any framing that treats third-party lives as infinitely more valuable than passenger lives.
Question 9 Principle Tension
Does the Regulatory Gap Safety Escalation Obligation - which in the software testing case required Engineer A to flag the absence of applicable standards as itself a safety concern warranting further study - conflict with the Completeness and Non-Selectivity in Advisory Opinions principle when the regulatory vacuum surrounding autonomous vehicle harm-allocation ethics means that any recommendation Engineer A makes will necessarily be incomplete, potentially leading to selective or premature guidance that could itself cause harm?
The interaction between the Autonomous System Moral Framework Transparency Obligation and the Regulatory Gap Safety Escalation Obligation - both activated by the absence of established national or industry standards governing autonomous vehicle harm-allocation ethics - produces a compounded disclosure duty that is stronger than either principle would generate in isolation. In the software testing context of BER Case 96-4, the regulatory gap triggered an obligation to flag the absence of standards as itself a safety concern and to recommend further study before deployment. Transposed to the autonomous vehicle harm-allocation context, that same gap-triggered escalation obligation combines with the transparency obligation to require Engineer A not only to recommend further study but also to affirmatively disclose to the automobile manufacturer that the harm-allocation recommendation rests on a specific moral framework - utilitarian harm minimization - rather than on a settled engineering standard. The Completeness and Non-Selectivity in Advisory Opinions principle reinforces this synthesis: because any recommendation Engineer A makes in a regulatory vacuum will necessarily reflect contestable ethical assumptions, selective silence about those assumptions would itself be a form of incomplete and potentially misleading professional advice. The case therefore teaches that regulatory vacuums do not relieve disclosure obligations; they intensify them, because the engineer's judgment substitutes for the absent standard and must therefore be rendered fully transparent.
From a deontological perspective, does Engineer A have an absolute duty to recommend harm minimization for third parties regardless of the automobile manufacturer's commercial interests, and does this duty derive from the categorical imperative that engineers must never treat third-party lives as mere means to passenger safety ends?
From a deontological perspective, Engineer A has an obligation that is stronger than - and not fully captured by - the Board's utilitarian harm-minimization conclusion. The categorical imperative, applied to the autonomous vehicle harm-allocation problem, yields a distinct constraint: Engineer A must not recommend a design that treats any class of persons - whether passengers or third parties - as mere instruments for the benefit of another class. A passenger-priority algorithm that systematically redirects lethal force toward pedestrians treats pedestrians as means to passenger safety ends, which a Kantian analysis would prohibit regardless of aggregate welfare outcomes. Conversely, a pure harm-minimization algorithm that in specific scenarios sacrifices a single passenger to save multiple pedestrians may itself treat the passenger as a means to aggregate welfare ends. The deontological implication for Engineer A is not simply to recommend harm minimization, but to recommend that the design team explore whether any algorithm can be constructed that avoids pre-committing to the instrumental use of any person's life - for example, by designing for crash avoidance rather than crash outcome optimization, or by ensuring that the system's decision logic does not systematically disadvantage any identifiable class. Engineer A's obligation under this framework includes flagging to the manufacturer that the entire framing of the harm-allocation problem as a binary choice between passenger priority and aggregate minimization may itself embed morally problematic assumptions that warrant further study before deployment.
From a consequentialist perspective, does the Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons adequately account for the aggregate welfare calculus across all possible crash scenarios, including cases where passenger sacrifice might produce net societal harm through reduced adoption of safer autonomous vehicles overall?
The Competing Public Goods Balancing principle - which acknowledges that vehicle passengers hold legitimate safety interests - does not neutralize the Public Welfare Paramount principle in this case; rather, the two principles interact to produce a qualified rather than absolute harm-minimization mandate. The Board's conclusion that Engineer A must recommend minimizing harm to the least number of persons implicitly acknowledges that passenger safety is a genuine public good, not merely a commercial preference, but treats aggregate harm reduction across all affected parties as the governing metric when those goods conflict. This resolution carries an important teaching: the Competing Public Goods Balancing principle functions as a corrective against naive utilitarian aggregation that would ignore passenger welfare entirely, while Public Welfare Paramount prevents that corrective from being weaponized to justify algorithms that predictably sacrifice a greater number of third-party lives to protect a smaller number of passengers. The net effect is that Engineer A's recommendation must be grounded in a harm-minimization calculus that counts all lives equally, resisting both pure passenger-priority logic and any framing that treats third-party lives as infinitely more valuable than passenger lives.
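The "counts all lives equally" calculus can be made concrete with a small sketch. This is a hypothetical illustration of the two competing metrics discussed above, not any real AV decision system; the scenario, the discount weight, and all function names are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class CrashOutcome:
    """Expected fatalities for one candidate maneuver in an unavoidable crash."""
    passenger_fatalities: float
    third_party_fatalities: float

def equal_counting_harm(outcome: CrashOutcome) -> float:
    # The metric the Board's conclusion implies: every life counts the same,
    # passenger or third party.
    return outcome.passenger_fatalities + outcome.third_party_fatalities

def passenger_priority_harm(outcome: CrashOutcome, third_party_weight: float = 0.1) -> float:
    # A passenger-priority metric discounts third-party harm -- the logic the
    # opinion says Engineer A must recommend against. The 0.1 weight is an
    # arbitrary placeholder for illustration.
    return outcome.passenger_fatalities + third_party_weight * outcome.third_party_fatalities

# One passenger dies, or two pedestrians die:
swerve = CrashOutcome(passenger_fatalities=1.0, third_party_fatalities=0.0)
stay = CrashOutcome(passenger_fatalities=0.0, third_party_fatalities=2.0)

# Equal counting picks the swerve (1 expected death vs 2) ...
assert min((swerve, stay), key=equal_counting_harm) is swerve
# ... while the discounted metric picks staying course (0.2 vs 1.0),
# predictably shifting fatal risk onto third parties.
assert min((swerve, stay), key=passenger_priority_harm) is stay
```

The sketch shows why the conflict is "direct and quantifiable": the two metrics disagree on the same inputs whenever the discount weight pushes the weighted third-party toll below the passenger toll.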
From a virtue ethics standpoint, does Engineer A demonstrate the professional integrity and moral courage required of a virtuous engineer when actively expressing concerns about harm-allocation algorithms within a risk assessment team that may face significant commercial pressure to prioritize passenger safety over third-party welfare?
From a virtue ethics standpoint, Engineer A demonstrates the professional integrity and moral courage required of a virtuous engineer precisely by actively and unambiguously expressing concerns about harm-allocation algorithms within the risk assessment team, even when facing commercial pressure to prioritize passenger safety. Virtue ethics evaluates not only the content of Engineer A's recommendation but the manner and disposition with which it is made. A virtuous engineer in Engineer A's position would not merely file a technically correct recommendation and withdraw; Engineer A would engage substantively with the team's deliberations, articulate the moral stakes of the design decision in terms accessible to non-engineer stakeholders, and persist in raising concerns through appropriate channels if the initial recommendation is dismissed. The virtue of practical wisdom - phronesis - is particularly relevant here: it requires Engineer A to recognize that the harm-allocation problem is not purely technical, that the risk assessment team's composition and mandate may not be adequate to resolve the embedded ethical questions, and that recommending further interdisciplinary study before deployment is itself an expression of professional integrity rather than a failure to provide a definitive answer. A virtuous engineer does not manufacture false certainty about genuinely contested moral questions in order to satisfy a client's desire for a clean recommendation.
From a deontological perspective, does Engineer A's obligation to disclose the moral framework embedded in the autonomous vehicle's harm-allocation algorithm to the public constitute a perfect duty under professional ethics codes, and does the absence of applicable regulatory standards heighten rather than relieve that disclosure duty?
Beyond the Board's finding that Engineer A must recommend minimizing harm to the least number of persons, Engineer A bears an additional obligation to explicitly disclose to the automobile manufacturer that this recommendation is grounded in a utilitarian ethical framework rather than in any established regulatory or industry standard. Because no applicable national or industry standards governing autonomous vehicle harm-allocation decision logic currently exist, Engineer A cannot represent the harm-minimization recommendation as a technically mandated or universally accepted engineering norm. Presenting it as such would violate the completeness and non-selectivity obligation that governs Engineer A's advisory role. Engineer A must therefore clearly communicate to the automobile manufacturer that the recommendation reflects a specific moral philosophy - one that reasonable engineers and ethicists might contest - so that the manufacturer can make a genuinely informed deployment decision. This disclosure obligation is heightened, not relieved, by the regulatory standards vacuum, because the absence of external standards places the full burden of ethical transparency on Engineer A as the professional advisor.
Question 14 Counterfactual
If Engineer A had remained silent or provided only a partial assessment of the third-party harm risks within the risk assessment team, would the automobile manufacturer have had sufficient information to make an ethically informed deployment decision, and would Engineer A's silence have constituted a violation of the faithful agent obligation?
If Engineer A had remained silent or provided only a partial assessment of third-party harm risks within the risk assessment team, the automobile manufacturer would not have had sufficient information to make an ethically informed deployment decision, and Engineer A's silence would have constituted a violation of both the faithful agent obligation and the public welfare paramount obligation. The faithful agent obligation requires Engineer A to provide the manufacturer with complete, accurate, and professionally grounded information relevant to the design decision - including information that is commercially inconvenient. Partial disclosure that omits the third-party harm implications of a passenger-priority algorithm would deprive the manufacturer of the ability to make an informed choice about the ethical and legal risks it is assuming. Simultaneously, Engineer A's silence would violate the public welfare paramount obligation by allowing a design to proceed toward deployment without the safety concerns having been formally raised, documented, and considered. The Code provision at III.1.b. - requiring engineers to advise clients when a project will not be successful - applies by analogy: a harm-allocation algorithm that foreseeably causes fatal harm to third parties in a predictable class of scenarios is not a successful engineering outcome, and Engineer A is obligated to say so. Silence in the face of a known, foreseeable, and serious public safety risk is not a neutral act under the NSPE Code; it is a breach of the engineer's professional duty.
Question 15 Counterfactual
What if the automobile manufacturer had already established a firm design policy prioritizing passenger safety above all third-party considerations before Engineer A joined the risk assessment team - would Engineer A's ethical obligations shift from recommendation to escalation or refusal to certify the system?
If the automobile manufacturer had already established a firm design policy prioritizing passenger safety above all third-party considerations before Engineer A joined the risk assessment team, Engineer A's ethical obligations would shift materially - from recommendation toward escalation and, if necessary, refusal to certify. Under these circumstances, Engineer A's initial obligation to recommend the harm-minimization approach would remain, but its character would change: rather than being a prospective design input, it would function as a formal objection to an existing policy. Engineer A would be required to document that objection in writing, communicate it to the manufacturer's responsible decision-makers, and make clear that the existing passenger-priority policy creates foreseeable fatal risks to third parties that Engineer A regards as inconsistent with the public welfare paramount obligation. If the manufacturer declined to reconsider the policy after receiving this formal objection, Engineer A would face the question of whether to continue participating in the project. Continued participation in the design and certification of a system that Engineer A has formally identified as posing an unreasonable risk of fatal harm to third parties would be difficult to reconcile with the Code's prohibition on approving engineering documents that are not in conformity with applicable standards (II.1.b.). Engineer A would therefore be obligated to decline to certify or approve the system, and to evaluate whether the severity and foreseeability of the third-party harm risk triggers any external reporting obligation under the public welfare paramount principle.
Question 16 Counterfactual
Had established national or industry standards governing autonomous vehicle harm-allocation decision logic existed at the time of Engineer A's assessment - analogous to the draft standards emerging in BER Case 96-4 - would Engineer A's obligation to recommend further study before deployment have been stronger, weaker, or qualitatively different in character?
Had established national or industry standards governing autonomous vehicle harm-allocation decision logic existed at the time of Engineer A's assessment - analogous to the draft standards emerging in BER Case 96-4 - Engineer A's obligation to recommend further study before deployment would have been qualitatively different in character, though not necessarily stronger in absolute terms. The existence of applicable standards would have provided Engineer A with an external, professionally validated benchmark against which to evaluate the manufacturer's proposed algorithm, reducing the degree to which Engineer A's recommendation rested on Engineer A's individual ethical judgment. This would have made the recommendation more defensible, more actionable, and more likely to be accepted by the manufacturer. However, the absence of such standards does not weaken Engineer A's substantive obligation; it merely changes its epistemic basis. In the regulatory vacuum that actually exists, Engineer A's obligation to recommend further study is grounded in the recognition - itself drawn from the BER Case 96-4 analogy - that the absence of applicable standards is itself a safety-relevant fact that the manufacturer must be made aware of before deployment. The regulatory gap heightens the disclosure obligation and strengthens the case for recommending further interdisciplinary study, because it means that no external body has yet validated any harm-allocation approach as meeting a minimum standard of public safety. Engineer A's recommendation in the absence of standards must therefore be more explicitly provisional, more clearly flagged as reflecting one among several defensible approaches, and more strongly oriented toward recommending that deployment await the development of at least preliminary industry consensus.
Question 17 Counterfactual
If Engineer A had proposed and the team had successfully identified a technical mitigation option - such as a sensor-based system capable of dynamically evaluating crash scenarios in real time rather than relying on pre-committed algorithmic harm-allocation logic - would the core ethical dilemma between passenger safety and third-party harm minimization have been dissolved, and what residual ethical obligations would Engineer A retain regarding transparency about the system's remaining limitations?
The Board's harm-minimization conclusion, while sound as a first-order ethical directive, does not adequately account for the possibility that a technically superior mitigation option - such as a sensor-based dynamic crash evaluation system capable of real-time scenario assessment rather than pre-committed algorithmic harm-allocation logic - could dissolve or substantially reduce the binary ethical dilemma between passenger safety and third-party harm minimization. Engineer A's obligation to explore additional technical mitigation options before accepting the dilemma as irreducible is itself an ethical duty, not merely a technical preference. Analogous to the reasoning in BER Case 96-4, where Engineer A was obligated to recommend further study and additional testing before deployment of safety-critical software, Engineer A in the present case must recommend that the risk assessment team investigate whether the harm-allocation decision can be made dynamically rather than pre-committed, thereby potentially achieving better outcomes for all parties across a wider range of crash scenarios. Recommending harm minimization without first exhausting technically feasible alternatives that could reduce the need for any pre-committed harm allocation would itself be an incomplete discharge of Engineer A's professional competence and public welfare obligations. If such alternatives are found to be technically infeasible, Engineer A must document that finding transparently so that the manufacturer's deployment decision is fully informed.
If Engineer A had proposed and the team had successfully identified a technical mitigation option - such as a sensor-based system capable of dynamically evaluating crash scenarios in real time rather than relying on pre-committed algorithmic harm-allocation logic - the core ethical dilemma between passenger safety and third-party harm minimization would be substantially but not fully dissolved. A dynamic real-time evaluation system would eliminate the most ethically troubling feature of pre-committed harm-allocation logic: the systematic, categorical pre-assignment of fatal risk to identifiable classes of persons based on their mode of transportation rather than on the actual circumstances of a specific crash. However, Engineer A would retain significant residual ethical obligations even if such a system were technically feasible. First, Engineer A would be obligated to assess and disclose the reliability limitations of the dynamic evaluation system - including sensor failure modes, edge cases where real-time evaluation is impossible, and the possibility that the system's dynamic decisions might themselves embed implicit harm-allocation biases through the weighting of its input variables. Second, Engineer A would be obligated to recommend that the dynamic system's decision logic be made transparent to consumers and regulators, since the ethical concerns about algorithmic opacity do not disappear merely because the algorithm operates in real time rather than through pre-commitment. Third, Engineer A would be obligated to recommend that the dynamic system undergo further study and testing before deployment, since the novelty of the technology means that its real-world performance across the full range of crash scenarios cannot be validated through design analysis alone. The identification of a technical mitigation option reduces but does not eliminate Engineer A's public safety obligations.
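The distinction between pre-committed allocation logic and dynamic real-time evaluation, which drives the counterfactual above, can be sketched in a few lines. This is a hypothetical illustration only; the policy table, maneuver names, and harm estimates are assumptions, not any real system's design.

```python
# Pre-committed allocation: the choice among classes of persons is fixed at
# design time, before the circumstances of any specific crash are known.
PRECOMMITTED_POLICY = {"unavoidable_crash": "protect_passengers"}

def precommitted_decision(scenario_type: str) -> str:
    # The same class-level rule fires regardless of what the sensors see.
    return PRECOMMITTED_POLICY[scenario_type]

def dynamic_decision(maneuvers: dict[str, float]) -> str:
    """Dynamic evaluation: rank the maneuvers actually available in this
    specific crash by sensed expected total fatalities, rather than by a
    fixed class rule. Keys are candidate maneuvers; values are expected
    fatalities (illustrative estimates)."""
    return min(maneuvers, key=maneuvers.get)

# At runtime the sensors may reveal an option the design-time rule never
# considered -- e.g. braking into an empty lane that harms no one.
sensed = {"swerve_left": 2.0, "stay_course": 1.0, "brake_into_clear_lane": 0.0}
assert dynamic_decision(sensed) == "brake_into_clear_lane"
assert precommitted_decision("unavoidable_crash") == "protect_passengers"
```

Note how the residual obligations identified above map onto this sketch: the sensed estimates can be wrong (sensor failure modes), and the ranking function itself can embed implicit biases through how its input values are produced and weighted.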
Rich Analysis Results
Causal-Normative Links (6)
Recommend Additional Safety Testing
- Engineer A AV Further Study Recommendation Before Deployment Obligation
- Engineer A BER 96-4 Additional Testing Recommendation Obligation
- New Draft Standard Awareness Additional Testing Recommendation Obligation
- Autonomous Vehicle Further Study Recommendation Before Deployment Obligation
Actively Participate in Risk Assessment
- Engineer A AV Risk Assessment Team Harm Minimization Participation Obligation
- Engineer A Autonomous Vehicle Risk Assessment Active Participation Obligation
- Autonomous Vehicle Risk Assessment Active Participation and Concern Expression Obligation
Prepare Transparent Technical Report
- Engineer A BER 96-4 Technical Report Preparation Obligation
- Autonomous Vehicle Harm Minimization Algorithm Completeness Disclosure Obligation
- Autonomous Vehicle Moral Framework Public Transparency Disclosure Obligation
- Engineer A AV Moral Framework Public Transparency Recommendation Obligation
- Engineer A AV Faithful Agent Informed Decision Enablement Obligation
- Safety-Critical Software Informed Employer Decision Enablement Obligation
Unambiguously Express Safety Concerns
- Autonomous Vehicle Risk Assessment Active Participation and Concern Expression Obligation
- Engineer A Autonomous Vehicle Risk Assessment Active Participation Obligation
- Engineer A AV Risk Assessment Third-Party Safety Consideration Obligation
- Autonomous Vehicle Third-Party Harm Minimization Safety Consideration Obligation
- Engineer A Autonomous Vehicle Do No Harm Obligation
- Engineer A BER 96-4 Public Welfare Paramount Safety-Critical Software Obligation
Explore Additional Technical Mitigation Options
- Engineer A AV Further Study Recommendation Before Deployment Obligation
- Autonomous Vehicle Further Study Recommendation Before Deployment Obligation
- Engineer A AV Risk Assessment Team Harm Minimization Participation Obligation
- Technical Recommendation Business Pressure Non-Subordination Obligation
- Engineer A BER 96-4 Business Pressure Non-Subordination Obligation
Propose Further Study Before Deployment
- Engineer A AV Further Study Recommendation Before Deployment Obligation
- Engineer A Autonomous Vehicle Further Study Recommendation Obligation
- Autonomous Vehicle Further Study Recommendation Before Deployment Obligation
- Engineer A AV Faithful Agent Informed Decision Enablement Obligation
- Engineer A Autonomous Vehicle Do No Harm Obligation
- Autonomous Vehicle Third-Party Harm Minimization Safety Consideration Obligation
- Engineer A AV Risk Assessment Team Harm Minimization Participation Obligation
- Engineer A Autonomous Vehicle Risk Assessment Active Participation Obligation
- Autonomous Vehicle Risk Assessment Active Participation and Concern Expression Obligation
- Technical Recommendation Business Pressure Non-Subordination Obligation
Question Emergence (17)
Triggering Events
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
- Precedent Case Principles Activated
Triggering Actions
- Prepare Transparent Technical Report
- Unambiguously Express Safety Concerns
Competing Warrants
- Autonomous System Moral Framework Transparency Obligation ↔ Engineer A AV Faithful Agent Informed Decision Enablement Obligation
- Algorithmic Harm Distribution Ethics in Autonomous Systems ↔ Completeness and Non-Selectivity in Advisory Opinions Invoked by Engineer A Risk Assessment Team
- Engineer A AV Moral Framework Public Transparency Recommendation Obligation ↔ Autonomous Vehicle Harm Minimization Algorithm Completeness Disclosure Obligation
Triggering Events
- Autonomous Vehicle AV OS Development Initiated
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
Triggering Actions
- Actively Participate in Risk Assessment
- Unambiguously Express Safety Concerns
Competing Warrants
- Faithful Agent Obligation Within Ethical Limits Invoked for Engineer A Consultant Role ↔ Third-Party Non-Client Welfare Consideration Invoked in Autonomous Vehicle Design
Triggering Events
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
- Autonomous Vehicle AV OS Development Initiated
Triggering Actions
- Explore Additional Technical Mitigation Options
- Propose Further Study Before Deployment
Competing Warrants
- Competing Public Goods Balancing Invoked in Passenger vs. Third-Party Safety Trade-Off ↔ Public Welfare Paramount Invoked in Autonomous Vehicle Crash Algorithm Design
Triggering Events
- Algorithmic Ethics Gap Recognized
- Autonomous Vehicle AV OS Development Initiated
- Precedent Case Principles Activated
- Draft Safety Standards Emerge
Triggering Actions
- Propose Further Study Before Deployment
- Prepare Transparent Technical Report
- Recommend Additional Safety Testing
Competing Warrants
- Regulatory Gap Safety Escalation Obligation Invoked in Software Testing Case ↔ Completeness and Non-Selectivity in Advisory Opinions Invoked by Engineer A Risk Assessment Team
Triggering Events
- Autonomous Vehicle AV OS Development Initiated
- Algorithmic Ethics Gap Recognized
- Precedent Case Principles Activated
Triggering Actions
- Unambiguously Express Safety Concerns
- Propose Further Study Before Deployment
- Explore Additional Technical Mitigation Options
Competing Warrants
- Engineer A AV Risk Assessment Team Harm Minimization Participation Obligation ↔ Engineer A AV Regulatory Standards Vacuum Escalation Permissibility Constraint
- Technical Recommendation Business Pressure Non-Subordination Obligation ↔ Engineer A AV Client Interest Third-Party Safety Priority Constraint
- Autonomous Vehicle Third-Party Harm Minimization Safety Consideration Obligation ↔ Engineer A AV Passenger Priority Algorithm Third-Party Fatal Harm Non-Subordination Constraint
- Faithful Agent Obligation Within Ethical Limits Invoked for Engineer A Consultant Role ↔ Do No Harm Obligation Invoked by Engineer A in Autonomous Vehicle Case
Triggering Events
- Algorithmic Ethics Gap Recognized
- Unavoidable Crash Scenario Identified
- Precedent Case Principles Activated
Triggering Actions
- Prepare Transparent Technical Report
- Unambiguously Express Safety Concerns
- Actively Participate in Risk Assessment
Competing Warrants
- Engineer A AV Faithful Agent Informed Decision Enablement Obligation ↔ Informed Decision-Making Enablement Obligation Invoked for Automobile Manufacturer Client
- Completeness and Non-Selectivity in Advisory Opinions Invoked by Engineer A Risk Assessment Team ↔ Engineer A Autonomous Vehicle Risk Assessment Active Participation Obligation
- Faithful Agent Obligation Within Ethical Limits Invoked for Engineer A Consultant Role ↔ Do No Harm Obligation Invoked by Engineer A in Autonomous Vehicle Case
Triggering Events
- Algorithmic Ethics Gap Recognized
- Autonomous Vehicle AV OS Development Initiated
- Precedent Case Principles Activated
Triggering Actions
- Prepare Transparent Technical Report
- Unambiguously Express Safety Concerns
Competing Warrants
- Autonomous System Moral Framework Transparency Obligation Invoked in AV Design ↔ Informed Decision-Making Enablement Obligation Invoked for Automobile Manufacturer Client
Triggering Events
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
- Autonomous Vehicle AV OS Development Initiated
- Precedent Case Principles Activated
Triggering Actions
- Actively Participate in Risk Assessment
- Unambiguously Express Safety Concerns
- Explore Additional Technical Mitigation Options
Competing Warrants
- Do No Harm Obligation Invoked by Engineer A in Autonomous Vehicle Case ↔ Faithful Agent Obligation Within Ethical Limits Invoked for Engineer A Consultant Role
- Algorithmic Harm Distribution Ethics Invoked in Autonomous Vehicle Case ↔ Third-Party Non-Client Welfare Consideration Invoked in Autonomous Vehicle Case
Triggering Events
- Autonomous Vehicle AV OS Development Initiated
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
Triggering Actions
- Explore Additional Technical Mitigation Options
- Propose Further Study Before Deployment
Competing Warrants
- Autonomous Vehicle Third-Party Harm Minimization Safety Consideration Obligation ↔ Engineer A AV Further Study Recommendation Before Deployment Obligation
- Competing Public Goods Balancing Invoked in Passenger vs. Third-Party Safety Trade-Off ↔ Public Welfare Paramount Invoked in Autonomous Vehicle Crash Algorithm Design
- Algorithmic Harm Distribution Ethics in Autonomous Systems ↔ Third-Party Non-Client Welfare Consideration in Autonomous System Design
Triggering Events
- Autonomous Vehicle AV OS Development Initiated
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
- Precedent Case Principles Activated
Triggering Actions
- Actively Participate in Risk Assessment
- Explore Additional Technical Mitigation Options
- Propose Further Study Before Deployment
Competing Warrants
- Engineer A AV Risk Assessment Team Harm Minimization Participation Obligation ↔ Engineer A AV Faithful Agent Informed Decision Enablement Obligation
- Engineer A Autonomous Vehicle Do No Harm Obligation ↔ Autonomous Vehicle Third-Party Harm Minimization Safety Consideration Obligation
- Technical Recommendation Business Pressure Non-Subordination Obligation ↔ Engineer A AV Further Study Recommendation Before Deployment Obligation
Triggering Events
- Autonomous Vehicle AV OS Development Initiated
- Algorithmic Ethics Gap Recognized
- Unavoidable Crash Scenario Identified
Triggering Actions
- Prepare Transparent Technical Report
- Propose Further Study Before Deployment
- Unambiguously Express Safety Concerns
Competing Warrants
- Autonomous Vehicle Moral Framework Public Transparency Disclosure Obligation ↔ Engineer A AV Faithful Agent Informed Decision Enablement Obligation
- Third-Party Non-Client Welfare Consideration Invoked in Autonomous Vehicle Design ↔ Autonomous System Moral Framework Transparency Obligation Invoked in AV Design
- Engineer A AV Moral Framework Public Transparency Recommendation Obligation ↔ Engineer A AV Further Study Recommendation Before Deployment Obligation
Triggering Events
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
- Autonomous Vehicle AV OS Development Initiated
Triggering Actions
- Recommend Additional Safety Testing
- Unambiguously Express Safety Concerns
- Propose Further Study Before Deployment
Competing Warrants
- Engineer A Autonomous Vehicle Do No Harm Obligation ↔ Engineer A AV Faithful Agent Informed Decision Enablement Obligation
- Autonomous Vehicle Third-Party Harm Minimization Safety Consideration Obligation ↔ Technical Recommendation Business Pressure Non-Subordination Obligation
- Engineer A AV Passenger Priority Algorithm Third-Party Fatal Harm Non-Subordination Constraint ↔ Graduated Internal Escalation Before External Reporting Invoked in Software Testing Case
Triggering Events
- Autonomous Vehicle AV OS Development Initiated
- Algorithmic Ethics Gap Recognized
- Unavoidable Crash Scenario Identified
Triggering Actions
- Actively Participate in Risk Assessment
- Unambiguously Express Safety Concerns
- Prepare Transparent Technical Report
Competing Warrants
- Engineer A AV Faithful Agent Informed Decision Enablement Obligation ↔ Faithful Agent Obligation Within Ethical Limits Invoked for Engineer A Consultant Role
- Engineer A Autonomous Vehicle Do No Harm Obligation ↔ Autonomous Vehicle Third-Party Harm Minimization Safety Consideration Obligation
- Engineer A AV Regulatory Standards Vacuum Escalation Permissibility Constraint ↔ Engineer A AV Client Interest Third-Party Safety Priority Constraint
Triggering Events
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
- Precedent Case Principles Activated
Triggering Actions
- Unambiguously Express Safety Concerns
- Actively Participate in Risk Assessment
Competing Warrants
- Engineer A Autonomous Vehicle Risk Assessment Active Participation Obligation ↔ Technical Recommendation Independence from Business Considerations Invoked in Software Testing Case
- Active Risk Assessment Team Participation Obligation Invoked by Engineer A ↔ Engineer A BER 96-4 Business Pressure Non-Subordination Obligation
- Do No Harm Obligation in Professional Engineering Services ↔ Faithful Agent Obligation Within Ethical Limits Invoked for Engineer A Consultant Role
Triggering Events
- Algorithmic Ethics Gap Recognized
- Autonomous Vehicle AV OS Development Initiated
- Precedent Case Principles Activated
Triggering Actions
- Prepare Transparent Technical Report
Competing Warrants
- Autonomous Vehicle Moral Framework Public Transparency Disclosure Obligation ↔ Engineer A AV Harm Allocation Moral Framework Non-Deception Public Disclosure Constraint
- Autonomous System Moral Framework Transparency Obligation Invoked in AV Design ↔ Engineer A AV Regulatory Standards Vacuum Heightened Disclosure Constraint
- Completeness and Non-Selectivity in Advisory Opinions Invoked by Engineer A Risk Assessment Team ↔ Informed Decision-Making Enablement Obligation Invoked for Automobile Manufacturer Client
Triggering Events
- Autonomous Vehicle AV OS Development Initiated
- Algorithmic Ethics Gap Recognized
- Draft Safety Standards Emerge
- Precedent Case Principles Activated
Triggering Actions
- Propose Further Study Before Deployment
- Prepare Transparent Technical Report
- Recommend Additional Safety Testing
Competing Warrants
- Engineer A AV Further Study Recommendation Before Deployment Obligation ↔ New Draft Standard Awareness Additional Testing Recommendation Obligation
- Engineer A AV Regulatory Standards Vacuum Escalation Permissibility Constraint ↔ Engineer A BER 96-4 Emerging Standard Technical Report Disclosure Constraint
- Autonomous Vehicle Further Study Recommendation Before Deployment Obligation ↔ Technical Recommendation Business Pressure Non-Subordination Obligation
Triggering Events
- Unavoidable Crash Scenario Identified
- Algorithmic Ethics Gap Recognized
- Autonomous Vehicle AV OS Development Initiated
- Precedent Case Principles Activated
Triggering Actions
- Explore Additional Technical Mitigation Options
- Prepare Transparent Technical Report
- Unambiguously Express Safety Concerns
- Propose Further Study Before Deployment
Competing Warrants
- Autonomous Vehicle Harm Minimization Algorithm Completeness Disclosure Obligation ↔ Engineer A AV Moral Framework Public Transparency Recommendation Obligation
- Autonomous Vehicle Third-Party Harm Minimization Safety Consideration Obligation ↔ Engineer A AV Faithful Agent Informed Decision Enablement Obligation
- Autonomous Vehicle Moral Framework Public Transparency Disclosure Obligation
Resolution Patterns (18)
Determinative Principles
- Public Welfare Paramount as governing metric in aggregate harm conflicts
- Competing Public Goods Balancing as corrective against naive utilitarian aggregation
- Equal counting of all lives in harm-minimization calculus
Determinative Facts
- Passenger safety is a genuine public good, not merely a commercial preference, because passengers are also members of the public whose welfare the Code protects
- The algorithm that best protects passengers is the same algorithm most likely to cause fatal harm to a greater number of third parties, making the conflict direct and quantifiable
- The Board's conclusion mandates minimizing harm to the least number of persons, implying an aggregate cross-party calculus rather than categorical passenger exclusion
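The aggregate cross-party calculus described above can be sketched as a toy selection rule. This is an illustrative sketch only; the maneuver names, harm estimates, and data structure are assumptions for demonstration, not part of the Board's record. It shows the two features the facts identify: every life counts equally, and the choice minimizes the total number of persons harmed across all parties.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical crash-response option with estimated harm counts."""
    name: str
    passengers_harmed: int     # occupants of the autonomous vehicle
    third_parties_harmed: int  # pedestrians, cyclists, motorcyclists

def total_harm(m: Maneuver) -> int:
    # Equal counting: a passenger life and a third-party life weigh the same.
    return m.passengers_harmed + m.third_parties_harmed

def select_maneuver(options: list[Maneuver]) -> Maneuver:
    # Aggregate harm minimization: pick the option harming the fewest persons,
    # regardless of which party class bears the harm.
    return min(options, key=total_harm)

options = [
    Maneuver("protect passengers", passengers_harmed=0, third_parties_harmed=3),
    Maneuver("swerve to shoulder", passengers_harmed=1, third_parties_harmed=0),
]
print(select_maneuver(options).name)  # -> swerve to shoulder
```

Note that the sketch also makes the contested assumption visible: the passenger-protective option loses precisely because the calculus refuses to privilege passengers, which is the utilitarian premise the disclosure obligations above require Engineer A to flag.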
Determinative Principles
- Autonomous System Moral Framework Transparency Obligation intensified by regulatory vacuum
- Regulatory Gap Safety Escalation Obligation requiring the gap itself to be flagged as a safety concern
- Completeness and Non-Selectivity in Advisory Opinions preventing selective silence about contestable ethical assumptions
Determinative Facts
- No established national or industry standards govern autonomous vehicle harm-allocation ethics at the time of Engineer A's assessment, creating a regulatory vacuum in which Engineer A's judgment substitutes for an absent standard
- The harm-minimization recommendation rests on a utilitarian moral framework rather than a settled engineering standard, making the ethical assumptions contestable and therefore requiring disclosure
- The software testing precedent in BER Case 96-4 established that a regulatory gap triggers an obligation to flag the absence of standards as itself a safety concern and to recommend further study before deployment
Determinative Principles
- Virtue ethics / professional integrity and moral courage
- Practical wisdom (phronesis) as a guide to recognizing limits of technical mandate
- Prohibition on manufacturing false certainty about genuinely contested moral questions
Determinative Facts
- Engineer A operates within a risk assessment team subject to commercial pressure to prioritize passenger safety
- The harm-allocation problem is not purely technical and the team's composition may be inadequate to resolve embedded ethical questions
- Recommending further interdisciplinary study before deployment is itself an expression of professional integrity, not a failure to answer
Determinative Principles
- Faithful Agent Obligation — complete and accurate disclosure of professionally relevant information, including commercially inconvenient information
- Public Welfare Paramount Obligation — formal raising, documentation, and consideration of known foreseeable public safety risks
- Silence as a non-neutral act constituting an affirmative breach of professional duty
Determinative Facts
- Partial disclosure omitting third-party harm implications would deprive the manufacturer of the ability to make an informed ethical and legal risk decision
- A harm-allocation algorithm that foreseeably causes fatal harm to third parties in a predictable class of scenarios is not a successful engineering outcome
- Engineer A's silence would allow a design to proceed toward deployment without safety concerns being formally raised or documented
Determinative Principles
- Escalation obligation when prospective design input becomes formal objection to existing policy
- Prohibition on approving engineering documents not in conformity with sound engineering principles
- Public welfare paramount obligation as a potential trigger for external reporting when internal escalation fails
Determinative Facts
- The manufacturer had already established a firm passenger-priority policy before Engineer A joined the team, converting Engineer A's role from prospective recommender to formal objector
- The existing policy creates foreseeable fatal risks to third parties that Engineer A regards as inconsistent with the public welfare paramount obligation
- Continued participation in design and certification of a system Engineer A has formally identified as posing unreasonable fatal risk to third parties is difficult to reconcile with the Code
Determinative Principles
- Technical mitigation substantially but not fully dissolves the pre-commitment harm-allocation dilemma
- Residual disclosure obligation regarding reliability limitations, failure modes, and implicit biases of the dynamic system
- Transparency and further study obligations persist regardless of whether the algorithm operates in real time or through pre-commitment
Determinative Facts
- A dynamic real-time evaluation system eliminates the most ethically troubling feature of pre-committed logic — categorical pre-assignment of fatal risk to identifiable classes of persons
- The dynamic system retains sensor failure modes, edge cases where real-time evaluation is impossible, and potential implicit harm-allocation biases in input variable weighting
- The novelty of the technology means real-world performance across the full range of crash scenarios cannot be validated through design analysis alone
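The distinction drawn in these facts, between pre-committed allocation logic and dynamic real-time evaluation, can be made concrete with a minimal sketch. All names and the scenario encoding here are hypothetical assumptions for illustration. The point it demonstrates is the residual disclosure obligation: even the dynamic system needs a fallback for sensor failure, and that fallback is itself a pre-committed choice.

```python
from typing import Optional

# Hypothetical sensed-scenario record for an unavoidable crash,
# e.g. {"passengers_at_risk": 1, "third_parties_at_risk": 3}.
Scenario = dict

def precommitted_rule(_: Scenario) -> str:
    # Pre-committed logic: the allocation is fixed at design time,
    # independent of anything the sensors observe.
    return "protect passengers"

def dynamic_rule(scenario: Optional[Scenario]) -> str:
    # Dynamic evaluation: decide from the sensed scenario at run time.
    # Sensor failure, or an edge case where evaluation is impossible,
    # forces a fallback -- itself a pre-committed, disclosable choice.
    if scenario is None:
        return "fallback: maximum braking"
    if scenario["third_parties_at_risk"] > scenario["passengers_at_risk"]:
        return "protect third parties"
    return "protect passengers"

print(dynamic_rule(None))  # sensors failed -> fallback path
print(dynamic_rule({"passengers_at_risk": 1, "third_parties_at_risk": 3}))
```

The fallback branch is why transparency and further-study obligations persist: the dynamic architecture narrows, but does not eliminate, the set of outcomes decided in advance.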
Determinative Principles
- Public Welfare Paramount obligation is explicitly primary in the NSPE Code hierarchy and is not co-equal with the faithful agent duty
- Faithful Agent Obligation operates only within the ethical limits defined by the paramount obligation — it does not authorize recommendations that foreseeably cause fatal third-party harm
- Competing Public Goods Balancing acknowledges passenger safety as a legitimate interest but subordinates it to the paramount obligation when the passenger-protective algorithm is the same algorithm most likely to cause fatal third-party harm
Determinative Facts
- The manufacturer's commercial interest in a passenger-priority algorithm directly conflicts with the safety of pedestrians, cyclists, and motorcycle riders, making the conflict between Q6 obligations concrete rather than hypothetical
- The NSPE Code establishes an explicit hierarchy in which public welfare is paramount, meaning the board did not need to treat the faithful agent duty and the third-party welfare consideration as equal weights requiring balancing
- Engineer A can serve the manufacturer's legitimate interests constructively — by identifying technical solutions that minimize passenger harm within the harm-minimization framework — without suppressing or softening the safety recommendation
Determinative Principles
- Public Welfare Paramount — the obligation to hold paramount the safety, health, and welfare of the public overrides competing commercial or passenger-protective interests
- Aggregate Harm Minimization — when lives cannot all be protected, the ethical directive is to minimize the total number of persons harmed
- Third-Party Non-Client Welfare Consideration — pedestrians, cyclists, and motorcyclists who are not parties to the client relationship are nonetheless owed professional protection
Determinative Facts
- The autonomous vehicle's harm-allocation algorithm must make a pre-committed decision about whose safety to prioritize in unavoidable crash scenarios
- Third parties such as pedestrians, cyclists, and motorcyclists are foreseeably at risk from a passenger-prioritizing algorithm
- Engineer A is serving as a professional advisor on a risk assessment team with direct influence over the algorithm's design logic
Determinative Principles
- Completeness and Non-Selectivity in Advisory Opinions — Engineer A must not present a recommendation grounded in a specific moral philosophy as though it were a technically mandated or universally accepted engineering norm
- Informed Decision-Making Enablement — the manufacturer cannot give genuine informed consent to an embedded ethical framework it does not know it is adopting
- Regulatory Gap Transparency — the absence of applicable national or industry standards heightens rather than relieves the professional advisor's obligation to disclose the normative basis of any recommendation
Determinative Facts
- No applicable national or industry standards governing autonomous vehicle harm-allocation decision logic currently exist
- The harm-minimization recommendation is grounded in a utilitarian ethical framework, not in any established regulatory or technical standard
- Engineer A occupies an advisory role in which the manufacturer will rely on Engineer A's professional judgment to make a deployment decision affecting the public
Determinative Principles
- Public Welfare Paramount Duty Applies Equally to Consultants — the NSPE Code's paramount obligation to public safety is not diminished by the absence of a direct employment relationship
- Graduated Internal Escalation Before External Action — Engineer A must pursue documented internal escalation through the organizational hierarchy before considering withdrawal or external reporting
- Refusal to Certify as a Professional Threshold — if internal escalation fails and Engineer A cannot professionally certify the system as consistent with public welfare obligations, continued participation may constitute implicit endorsement of a harmful design
Determinative Facts
- The automobile manufacturer may override Engineer A's harm-minimization recommendation and program the vehicle to prioritize passenger safety above third-party welfare
- Engineer A is engaged as a consultant rather than a direct employee, raising the question of whether the consultant relationship reduces the scope of professional ethical duties
- A passenger-prioritizing algorithm foreseeably causes fatal injury to third parties who are not parties to the client relationship
Determinative Principles
- Technical Mitigation Before Ethical Dilemma Acceptance — Engineer A has a professional competence obligation to investigate whether the binary dilemma is technically irreducible before recommending a pre-committed harm-allocation rule
- Analogical Reasoning from BER Case 96-4 — the obligation to recommend further study before deployment of safety-critical systems applies equally when the ethical dilemma itself may be dissolved by superior technical alternatives
- Completeness of Professional Recommendation — recommending harm minimization without first exhausting technically feasible alternatives that could reduce the need for any pre-committed allocation is itself an incomplete discharge of Engineer A's public welfare and professional competence obligations
Determinative Facts
- Sensor-based dynamic crash evaluation systems capable of real-time scenario assessment may represent a technically superior alternative to pre-committed algorithmic harm-allocation logic
- The binary ethical dilemma between passenger safety and third-party harm minimization may be substantially reduced or dissolved if dynamic assessment is technically feasible
- No finding has been made that dynamic mitigation alternatives are technically infeasible, meaning the dilemma has not yet been established as irreducible
Determinative Principles
- Completeness and Non-Selectivity in Advisory Opinions — Engineer A must affirmatively disclose that the harm-minimization recommendation reflects a specific moral philosophy and that alternative frameworks exist and yield different algorithmic outcomes
- Informed Consent to Embedded Ethical Framework — the manufacturer cannot make a genuinely informed deployment decision without knowing that the selection among ethical frameworks is not a purely technical determination
- Deontological Challenge to Utilitarian Harm Allocation — a deontological framework would prohibit the vehicle from actively redirecting harm toward any third party regardless of aggregate outcome, representing a defensible alternative that Engineer A must surface rather than suppress
Determinative Facts
- The Board's harm-minimization conclusion implicitly adopts a utilitarian aggregate harm-minimization calculus without acknowledging it as one among several defensible moral philosophies
- Alternative ethical frameworks — particularly deontological approaches — yield materially different algorithmic outcomes and are not merely academic alternatives but practically consequential design choices
- Full public transparency about the algorithm's moral logic could expose the manufacturer to legal liability or competitive disadvantage that the client has not knowingly accepted
Determinative Principles
- Public Welfare Paramount principle — protection from material deception about safety-affecting products
- Autonomous System Moral Framework Transparency Obligation — users and third parties have a legitimate interest in knowing pre-committed decision logic governing their fate
- Regulatory Gap Safety Escalation — absence of external regulatory mandate heightens rather than relieves professional disclosure duty
Determinative Facts
- No applicable regulatory or industry standards govern autonomous vehicle harm-allocation decision logic at the time of Engineer A's assessment
- The harm-allocation algorithm pre-commits to outcomes on behalf of users who cannot intervene in real time, removing informed consent from the moment of consequence
- BER Case 96-4 established analogous precedent that a regulatory vacuum increases rather than diminishes the engineer's independent disclosure obligations
Determinative Principles
- Graduated Internal Escalation Before External Reporting — Engineer A must formally document disagreement in writing before external action is warranted
- Code prohibition on approving engineering documents not conforming to sound safety principles — Engineer A must decline to certify a passenger-priority system if it crosses the threshold of unjustifiable third-party risk
- Public Welfare Paramount obligation is not discharged by internal voicing of concern alone when that concern is overridden and a harmful design proceeds
Determinative Facts
- The manufacturer has overridden Engineer A's harm-minimization recommendation and elected a passenger-priority algorithm that foreseeably creates fatal risk to identifiable third-party classes
- Engineer A's initial recommendation has already been made, meaning residual obligations — not initial advisory duties — are now operative
- The passenger-priority algorithm creates foreseeable fatal risk to pedestrians, cyclists, and motorcycle riders as identifiable third-party classes, not merely speculative harm
Determinative Principles
- Public Welfare Paramount obligation applies with equal force to consultants and employees — the employment relationship does not define the scope of the safety duty
- Professional independence of the consultant relationship strengthens rather than weakens the obligation to provide honest, complete, and unvarnished safety assessments
- Consultant status compresses the internal escalation sequence because fewer organizational tiers exist through which concerns must be pressed before the duty is satisfied
Determinative Facts
- Engineer A is engaged as a consultant rather than a direct employee, meaning the client specifically contracted for Engineer A's independent professional judgment
- A consultant has fewer internal escalation tiers available than an embedded employee, meaning formal documentation and direct communication to responsible decision-makers more rapidly satisfies the escalation component
- Engineer A's silence or partial assessment (Q14) would have deprived the manufacturer of the complete information needed for an ethically informed deployment decision, constituting a faithful agent violation regardless of employment status
Determinative Principles
- Categorical imperative — Engineer A must not recommend a design that treats any class of persons as mere instruments for the benefit of another class, regardless of aggregate welfare outcomes
- Deontological constraint is stronger than and not fully captured by the utilitarian harm-minimization conclusion — it prohibits systematic pre-commitment to instrumental use of any person's life
- The framing of the harm-allocation problem as a binary choice between passenger priority and aggregate minimization may itself embed morally problematic assumptions that warrant further study before deployment
Determinative Facts
- A passenger-priority algorithm systematically redirects lethal force toward pedestrians, treating them as means to passenger safety ends in a manner the categorical imperative prohibits regardless of aggregate welfare outcomes
- A pure harm-minimization algorithm that sacrifices a single passenger to save multiple pedestrians may itself treat the passenger as a means to aggregate welfare ends, meaning neither binary option fully satisfies deontological constraints
- Technical alternatives — such as crash avoidance design rather than crash outcome optimization — may exist that avoid pre-committing to the instrumental use of any person's life, and Engineer A has an obligation to flag this possibility to the manufacturer
Determinative Principles
- Regulatory gap as itself a safety-relevant fact requiring disclosure
- External standards as an epistemic basis that changes the character but not the strength of the recommendation obligation
- Provisional and explicitly flagged guidance required in the absence of industry consensus
Determinative Facts
- No applicable national or industry standards governing autonomous vehicle harm-allocation decision logic existed at the time of Engineer A's assessment
- The absence of applicable standards means no external body has validated any harm-allocation approach as meeting a minimum public safety standard
- The BER Case 96-4 analogy establishes that a regulatory gap is itself a safety-relevant fact the manufacturer must be informed of before deployment
Determinative Principles
- Public Welfare Paramount as lexical side-constraint on faithful agent role
- Faithful Agent Obligation Within Ethical Limits
- Third-Party Non-Client Welfare Consideration
Determinative Facts
- The harm-allocation algorithm is pre-committed, meaning risk transfer to third parties is systematic and foreseeable rather than incidental
- Third-party pedestrians, cyclists, and motorcyclists are non-consenting bearers of the fatal risk created by the algorithm
- The client-versus-third-party conflict is genuinely zero-sum: protecting passengers via this algorithm necessarily increases lethal risk to others
Decision Points
Should Engineer A actively participate in the risk assessment team and formally express safety concerns about the harm-allocation algorithm — recommending further study before deployment — or should he limit his involvement to completing the technical evaluation without escalating those concerns?
- Formally Document Concerns, Recommend Further Study
- Complete Evaluation Without Escalating Concerns
- Flag Concerns Verbally, Defer Written Escalation
Should Engineer A explicitly disclose in the risk assessment report that the harm-minimization recommendation is grounded in a utilitarian ethical framework and recommend pre-sale public disclosure to consumers, or should Engineer A present the recommendation without labeling its philosophical basis?
- Disclose Ethical Framework Publicly in Report
- Present Both Frameworks Without Endorsing Either
- Disclose Framework Internally, Recommend Ethics Review
Should Engineer A formally notify the manufacturer's responsible decision-makers of the foreseeable fatal risk and — if unresolved — decline to certify the passenger-priority system, or should Engineer A document the disagreement without further escalation, or recommend an independent ethics review as an alternative to certification refusal?
- Formally Notify Manufacturer of Fatal Risk
- Document Disagreement, Treat Escalation as Complete
- Recommend Independent Ethics Review Before Deployment
Should Engineer A formally advocate within the risk assessment team that the harm-allocation algorithm minimize harm to the least number of persons — even under commercial pressure to prioritize passenger safety — or should Engineer A defer to the manufacturer's preferred passenger-priority framework?
- Actively Advocate Harm-Minimization in Writing
- Present Options and Defer to Team
- Accept Passenger-Priority as Legitimate Policy
Should Engineer A fully disclose to the automobile manufacturer that the harm-minimization recommendation reflects a utilitarian moral philosophy and recommend pre-sale consumer disclosure, partially disclose only to the manufacturer without advocating for consumer transparency, or present the recommendation as professional judgment without labeling its philosophical basis?
- Disclose Utilitarian Basis to Manufacturer Fully
- Disclose Internally, Limit Consumer Transparency
- Frame as Professional Judgment, Omit Label
Should Engineer A formally document the safety disagreement and decline to certify the passenger-priority system unless the public safety conflict is resolved, continue technical participation while documenting the concern, or withdraw from the engagement without formal certification refusal?
- Decline to Certify Passenger-Priority System
- Document Concern, Continue Technical Participation
- Withdraw Silently Without Formal Documentation
Case Narrative
Phase 4 narrative construction results for Case 165
Opening Context
You are Engineer A, a specialized software engineer on an Autonomous Vehicle Risk Assessment Team, where your expertise in safety-critical systems has earned you a seat at a table where the stakes extend far beyond code and compliance. Your team is navigating a precarious gap between emerging safety standards and their formal adoption — a space where technical decisions carry profound ethical weight and where the line between "sufficiently tested" and "safe enough" is anything but clear. You have been asked to deliver a harm allocation recommendation that will determine whether costly additional testing is warranted, placing you at the intersection of engineering judgment, organizational pressure, and public safety responsibility.
Characters (5)
A professional engineer who designed and tested specialized software for public-safety-critical facilities and was placed in the ethically precarious position of recommending whether costly additional testing was necessary to meet emerging safety standards.
- To provide an honest, technically grounded recommendation that upholds public safety and professional integrity, even when doing so conflicts with the financial interests of the employing software company and its clients.
- To bring a commercially viable and legally defensible autonomous vehicle to market while managing reputational, regulatory, and liability risks associated with algorithmic decisions that directly determine human harm outcomes.
- To fulfill paramount public safety obligations by ensuring that algorithmic crash outcome logic is rigorously evaluated, transparently documented, and does not unjustly prioritize passenger safety over vulnerable third-party road users.
The automobile manufacturer retains Engineer A as a consultant and has assembled an engineering risk assessment team to evaluate scenarios for a driverless/autonomous vehicle operating system under development, including crash outcome decision logic with direct public safety implications for third parties.
Designed specialized software for public-safety-critical facilities, conducted extensive testing, became aware of new draft standards the software might not meet, and was asked by the company to recommend whether additional costly testing was required.
A software development firm that employs Engineer A to produce safety-critical systems and faces competing pressures between client satisfaction and cost containment on one side, and genuine software safety assurance on the other.
- To protect the company's financial position and client relationships while avoiding the legal, ethical, and reputational consequences of deploying software that fails to meet evolving public-safety standards.
The automobile manufacturer in the present case that employs or retains Engineer A as part of an engineering risk management team to evaluate the autonomous vehicle operating system, bearing authority over deployment decisions and subject to Engineer A's paramount public safety obligations.
States (10)
Event Timeline (23)
| # | Event | Type |
|---|---|---|
| 1 | An engineer faces a complex ethical dilemma involving the design of an autonomous system, where decisions must be made about how potential harms and safety responsibilities are allocated among stakeholders. This foundational situation establishes the core tension between technical capability, public safety, and professional obligation. | state |
| 2 | The engineer formally recommends that the autonomous system undergo additional rounds of safety testing before any further development or deployment decisions are made. This recommendation reflects the engineer's professional duty to ensure that potential failure modes are thoroughly identified and addressed before the system can pose risks to the public. | action |
| 3 | The engineer prepares a comprehensive and candid technical report that openly documents the system's known limitations, uncertainties, and safety-related findings. By prioritizing transparency over convenience, this report ensures that all relevant parties have accurate information needed to make informed decisions about the system's future. | action |
| 4 | The engineer takes an active and substantive role in the formal risk assessment process, contributing technical expertise to evaluate the likelihood and severity of potential system failures. This engagement demonstrates the engineer's commitment to ensuring that risk evaluations are grounded in rigorous analysis rather than assumptions or commercial pressures. | action |
| 5 | The engineer clearly and directly communicates identified safety concerns to supervisors, clients, or other decision-makers, leaving no ambiguity about the nature or seriousness of the risks involved. This decisive action upholds the engineer's ethical obligation to prioritize public safety even when doing so may create professional friction or delay project timelines. | action |
| 6 | The engineer proactively investigates and proposes additional technical measures that could reduce or eliminate the identified safety risks within the autonomous system. This constructive approach demonstrates that raising safety concerns is paired with a genuine effort to find workable engineering solutions rather than simply halting progress. | action |
| 7 | The engineer formally advocates for delaying deployment of the autonomous system until further research and study can adequately address the unresolved safety questions. This recommendation prioritizes long-term public welfare over short-term project milestones, reflecting the core principle that engineers must not approve systems whose safety has not been sufficiently validated. | action |
| 8 | A critical discovery is made that specific software components within the autonomous system directly govern safety-critical functions, meaning that any errors or failures in this code could result in serious harm. This identification significantly escalates the ethical stakes of the case, as it establishes that the system's risks are not merely theoretical but are tied to concrete, high-consequence operational scenarios. | automatic |
| 9 | Draft Safety Standards Emerge | automatic |
| 10 | Financial Pressure on Testing | automatic |
| 11 | Autonomous Vehicle AV OS Development Initiated | automatic |
| 12 | Unavoidable Crash Scenario Identified | automatic |
| 13 | Precedent Case Principles Activated | automatic |
| 14 | Algorithmic Ethics Gap Recognized | automatic |
| 15 | Tension between Engineer A AV Risk Assessment Team Harm Minimization Participation Obligation / Autonomous Vehicle Further Study Recommendation Before Deployment Obligation and Engineer A AV Client Interest Third-Party Safety Priority Constraint | automatic |
| 16 | Tension between Autonomous Vehicle Harm Minimization Algorithm Completeness Disclosure Obligation / Autonomous Vehicle Moral Framework Public Transparency Disclosure Obligation and Engineer A AV Regulatory Standards Vacuum Escalation Permissibility Constraint | automatic |
| 17 | How should Engineer A discharge his obligations as a member of the automobile manufacturer's risk assessment team when the crash-avoidance algorithm's harm-distribution logic raises unresolved ethical and safety questions — specifically, whether to actively express concerns and recommend further study before deployment, or to defer to the team's commercial orientation and provide a narrower assessment? | decision |
| 18 | Given that no applicable national or industry standards govern autonomous vehicle harm-allocation decision logic, must Engineer A affirmatively disclose to the automobile manufacturer that the harm-minimization recommendation is grounded in a utilitarian ethical framework rather than a technically mandated norm — and must Engineer A further recommend that the manufacturer publicly disclose the algorithm's embedded moral framework to prospective consumers before deployment? | decision |
| 19 | If the automobile manufacturer overrides Engineer A's harm-minimization recommendation and proceeds with a passenger-priority algorithm that foreseeably creates fatal risk for pedestrians, cyclists, and motorcycle riders, what actions must Engineer A take — and does Engineer A's consultant status affect the scope or sequence of those obligations? | decision |
| 20 | Should Engineer A recommend that the autonomous vehicle's operating system minimize harm to the least number of persons, and actively express that concern within the risk assessment team even under commercial pressure to prioritize passenger safety? | decision |
| 21 | Must Engineer A affirmatively disclose to the automobile manufacturer that the harm-minimization recommendation is grounded in a utilitarian ethical framework rather than an established regulatory or industry standard, and must Engineer A further recommend that the manufacturer publicly disclose the vehicle's embedded ethical decision logic to consumers before deployment? | decision |
| 22 | If the automobile manufacturer overrides Engineer A's harm-minimization recommendation and programs the vehicle to prioritize passenger safety above third-party welfare, what actions must Engineer A take — and does Engineer A's consultant status alter the scope or sequence of those obligations? | decision |
| 23 | That being said, to address the specific question posed in the case, Engineer A has an obligation to state that the prime ethical obligation of the vehicle's operation is to minimize harm to the least number of persons. | outcome |
Decision Moments (6)
- Actively participate in all risk assessment team deliberations, formally document and unambiguously express concerns about the harm-allocation algorithm's third-party safety implications in writing, recommend that the manufacturer commission further interdisciplinary study — including ethical framework analysis and technical mitigation investigation — before deploying the operating system. (Actual outcome)
- Participate in the risk assessment team's technical evaluation, raise third-party harm concerns verbally during team deliberations, and provide a written recommendation that identifies the harm-minimization approach as preferable — without separately recommending that deployment be delayed for further study, on the ground that the team's collective judgment and the manufacturer's business timeline should govern the deployment decision once the technical recommendation has been delivered
- Participate in the risk assessment team's evaluation, recommend harm minimization as the preferred algorithm design, and separately recommend that the manufacturer explore technical mitigation options — such as dynamic real-time crash evaluation systems — as a means of reducing the need for pre-committed harm-allocation logic, framing further study as a technical improvement opportunity rather than a deployment prerequisite
- Explicitly identify in the written risk assessment report that the harm-minimization recommendation is grounded in a utilitarian ethical framework, present the deontological alternative framework and its different algorithmic implications with equal completeness, and include a specific advisory that the manufacturer implement pre-sale public disclosure of the vehicle's harm-allocation decision logic before deployment. (Actual outcome)
- Present both the passenger-priority and harm-minimization frameworks objectively in the risk assessment report — including their respective advantages and disadvantages — without characterizing either as utilitarian or deontological, and recommend that the manufacturer consult legal counsel and ethics advisors regarding consumer disclosure obligations, treating the philosophical labeling and public disclosure questions as outside the scope of the engineering risk assessment mandate
- Identify the philosophical basis of the harm-minimization recommendation in the internal risk assessment report delivered to the manufacturer, recommend that the manufacturer seek interdisciplinary ethics review before finalizing the algorithm design, but decline to recommend specific consumer-facing public disclosure on the ground that disclosure strategy is a legal and business decision within the manufacturer's exclusive authority as client
- Formally document the safety disagreement in writing addressed to the manufacturer's responsible decision-makers, clearly stating that the passenger-priority algorithm creates foreseeable fatal risk to third parties inconsistent with the public welfare paramount obligation; assess whether the system crosses the certification threshold under Code II.1.b and decline to certify if it does; and evaluate whether the severity and foreseeability of third-party harm trigger external reporting obligations if internal escalation fails. (Actual outcome)
- Formally document the safety disagreement in writing, deliver it to the manufacturer's project lead, and — upon being overruled — treat the internal escalation obligation as discharged given the compressed escalation sequence available in a consultant relationship; continue participating in the project in an advisory capacity without certifying the system, on the ground that declining to certify without an applicable regulatory standard to anchor the refusal would exceed the scope of the consultant's professional mandate
- Document the safety disagreement in the final risk assessment report, recommend that the manufacturer obtain an independent ethics and safety review of the passenger-priority algorithm before deployment, and withdraw from the consulting engagement if the manufacturer proceeds without that review — treating withdrawal as the appropriate professional response that preserves Engineer A's integrity without triggering external reporting obligations that the NSPE Code reserves for more severe and imminent public safety threats
- Formally recommend that the harm-allocation algorithm minimize harm to the least number of persons, actively express this position within the risk assessment team, document the recommendation in writing, and propose further interdisciplinary study and exploration of dynamic real-time mitigation alternatives before deployment. (Actual outcome)
- Present the harm-minimization approach as one among several technically defensible design options, defer to the risk assessment team's collective judgment on which framework to adopt, and limit Engineer A's formal output to a balanced technical summary of competing approaches without a personal recommendation
- Recommend harm minimization internally within the risk assessment team but accept the manufacturer's passenger-priority preference as a legitimate design policy choice within the manufacturer's authority, confining Engineer A's role to optimizing the passenger-priority algorithm's technical implementation rather than contesting the underlying policy
- Explicitly disclose to the automobile manufacturer in the technical report that the harm-minimization recommendation reflects a utilitarian moral philosophy rather than an established engineering standard, identify deontological and other alternative frameworks that yield different outcomes, and affirmatively recommend that the manufacturer implement pre-sale consumer disclosure of the vehicle's harm-allocation decision logic as a condition of ethically responsible deployment. (Actual outcome)
- Disclose the philosophical basis of the harm-minimization recommendation to the manufacturer's engineering and legal teams as part of the confidential consulting deliverable, but limit the consumer disclosure recommendation to a general advisory that the manufacturer consult legal counsel about disclosure obligations, leaving the public transparency decision to the manufacturer's business judgment
- Present the harm-minimization recommendation as Engineer A's professional judgment grounded in the NSPE Code's public welfare paramount obligation without characterizing it as utilitarian or labeling its philosophical foundations, on the basis that the Code itself — rather than a contested moral philosophy — provides the normative authority for the recommendation, and defer consumer disclosure questions to the manufacturer and its regulatory counsel
- Formally document the safety disagreement in writing to the manufacturer's responsible decision-makers, decline to certify or approve the passenger-priority system if it cannot be reconciled with the public welfare paramount obligation, and evaluate whether the foreseeability and severity of third-party fatal harm trigger an external reporting obligation beyond the consulting engagement. (Actual outcome)
- Document the safety concern in the consulting deliverable, communicate the disagreement verbally to the manufacturer's project lead, and continue participating in the technical optimization of the passenger-priority system while treating the manufacturer's policy override as a legitimate business decision within the client's authority — on the basis that Engineer A's professional duty is satisfied by having raised the concern and that the manufacturer bears ultimate design responsibility
- Withdraw from the consulting engagement upon the manufacturer's override without formal written documentation of the specific certification threshold crossed, on the basis that the consultant relationship does not obligate Engineer A to pursue multi-tier internal escalation through an organization in which Engineer A holds no employment standing, and that withdrawal itself constitutes a sufficient professional signal of non-endorsement
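The decision moments above all turn on the gap between a passenger-priority algorithm and a harm-minimization algorithm. As a purely illustrative sketch — the `Maneuver` type, the scenario, and the casualty figures are hypothetical and not drawn from the case record — the two frameworks can be contrasted as different objective functions over the same set of crash outcomes:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical crash outcome with expected casualties by group."""
    name: str
    passenger_harm: int    # expected passenger casualties
    third_party_harm: int  # expected third-party casualties

def passenger_priority(options):
    # Minimize passenger harm first; third-party harm only breaks ties.
    return min(options, key=lambda m: (m.passenger_harm, m.third_party_harm))

def harm_minimization(options):
    # Minimize total harm to the least number of persons, regardless of group.
    return min(options, key=lambda m: m.passenger_harm + m.third_party_harm)

options = [
    Maneuver("swerve into barrier", passenger_harm=1, third_party_harm=0),
    Maneuver("continue straight", passenger_harm=0, third_party_harm=3),
]

print(passenger_priority(options).name)  # continue straight
print(harm_minimization(options).name)   # swerve into barrier
```

The same option set yields opposite choices under the two objectives, which is precisely the conflict the Board's reasoning — and its stalemate — reflects.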
Sequential Action-Event Relationships
- Recommend Additional Safety Testing → Prepare Transparent Technical Report
- Prepare Transparent Technical Report → Actively Participate in Risk Assessment
- Actively Participate in Risk Assessment → Unambiguously Express Safety Concerns
- Unambiguously Express Safety Concerns → Explore Additional Technical Mitigation Options
- Explore Additional Technical Mitigation Options → Propose Further Study Before Deployment
- Propose Further Study Before Deployment → Safety-Critical_Software_Identified
- conflict_1 → decision_1
- conflict_1 → decision_2
- conflict_1 → decision_3
- conflict_1 → decision_4
- conflict_1 → decision_5
- conflict_1 → decision_6
- conflict_2 → decision_1
- conflict_2 → decision_2
- conflict_2 → decision_3
- conflict_2 → decision_4
- conflict_2 → decision_5
- conflict_2 → decision_6
Key Takeaways
- When regulatory frameworks have not yet caught up to emerging technology, engineers bear a heightened personal obligation to surface ethical concerns rather than defaulting to compliance silence.
- The prime directive of harm minimization cannot be subordinated to client commercial interests or passenger-priority algorithms when third-party fatal harm is a foreseeable outcome.
- A stalemate resolution signals that the board identified irreconcilable competing duties, meaning the engineer's obligation defaults to the most protective principle — public safety — as the irreducible floor.