We aren’t supposed to memorize answers for our tests; rote memorization doesn’t actually allow us to understand and learn the subject matter. Douthwaite defines a technological fix as “An attempt to answer a social or human problem using technological devices or systems without any attempt to modify or alter the underlying social or human problem” (Douthwaite, 31). Essentially, Douthwaite values a fix regardless of how temporary it is. The hope is that these fixes will buy time to develop an actual solution within the society. Using technology to buy ourselves time seems like a pleasant and beneficial notion when presented by Douthwaite alone; however, some argue that temporary fixes can do more harm than good.
In a sentence Michael Reidy may as well have written, Johnston states that an early proponent of the technological fix was “naively confident about the outcomes of science (…) tending to narrowly define the complexity of problems (‘reductionistic’)” (Johnston, 54). Johnston’s critique is that relying on the technological fix puts engineers in a position of moral authority. Acting within a utilitarian framework, they cannot possibly understand all of the repercussions that their fixes will have on all parties involved. This argument parallels our understanding of a trophic cascade: a technological cascade affects a human society much more immediately and directly than a trophic cascade we may observe in nature. The issue with a utilitarian band-aid doctor is that they will consistently choose to fix the problem in the simplest way possible. This type of solution, Johnston would argue, does not value the individual as much as it values the majority. The example of offering “intrauterine devices (IUDs) to limit family size and economic deprivation” (Johnston, 50) shows a far-reaching concern for the greater good as opposed to individual choice.
Huesemann and Huesemann take this point further by saying that those same ‘solutions’ will have unforeseen negative consequences. They point out that humans have never been able to improve upon nature (Huesemann, 5). They reframe the situation: humans must have hubris in order to believe that they can have only a positive impact on a given ecosystem. It can be inferred, then, that giving some people the authority to try to impact humanity while expecting only positive results is simply unrealistic.
Yes, unintended consequences should prevent us from settling for technological solutions to our problems. I’m not yet convinced that technological fixes are powerful enough for their outcomes to be accepted and abided by. Humans never turn back from the progress we make, or rather, we refuse to do so. Maybe the only solution is to work through a complicated, messy, large-scale human issue rather than letting the problem stew until it eventually boils over whatever technical solution may have contained it. I attended Dr. de Melo-Martin’s talk on ‘Creating Perfect Babies.’ She spoke on the increasing relevance of gene editing in embryonic humans and why the science itself needs to be reevaluated and appropriately understood before it continues, so that we don’t end up doing significant harm. Women are an obvious example: their bodies and organs would be needed to develop a process that does not yet acknowledge the basic fact that men and women would be affected differently by this new technology.