A server development company knows that a vulnerability exists in the management software for its new line of servers. Because of the vulnerability's severity, the software would need to be completely rewritten, which would set the release back by months. If exploited, the vulnerability could expose the data stored on the servers and disclose the personal information of end users. The software team doesn't believe the vulnerability will be discovered before they can rewrite the software, and customers could mitigate the issue by leaving the management software disabled. Multiple companies have already made plans to upgrade their equipment on release and have invested large amounts of money around the current release date. The organization plans to release the servers with the flaw and rely on its customers not to enable the vulnerable software. Is that the correct course of action?
I believe a utilitarian view would weigh the consequences of delaying the release, which would cost both the company and its customers a tremendous amount of money. The vulnerability is not easily identified, and since this is a new product, the chance of anyone exploiting the problem so quickly would be very low. A utilitarian view might therefore conclude that releasing the servers is the better course, as it would spare the companies large financial losses that could immediately cost people their jobs.
A duty ethics view would refrain from releasing the servers until the issue was resolved. Putting users' personal data at risk violates both their right to privacy and their right to safety. Although delaying would cost a great deal of money and could harm the companies involved, releasing an unsafe product would violate the company's duty, even if it warned customers not to use the vulnerable features. Knowingly allowing the risk to exist goes against its duty to protect individuals' rights.
By employing a virtue ethics view, the company might consider the actions of other organizations with strong reputations. Another organization may have faced a similar situation whose decisions and outcomes could serve as a guide. For example, the company might follow the example of a firm that shortened its delay by shipping a secure but feature-incomplete version of the software and completing it later. This would not violate the company's virtues and would eliminate the risk to end users.
Although these three theories may each arrive at a different answer on their own, I believe the best solution would need to incorporate all three.