Douthwaite believes that using technology to fix society's problems is the best, indeed the only, true approach. Others, however, such as Johnstone and Huesemann and Huesemann, argue that this belief is naive and ignores other important factors involved in the issues being addressed. Johnstone notes that certain social rules can become unnecessary if proper design is put in place, but that engineers and scientists often view a problem very differently than a politician or advocate does. For example, they "underestimate societal side-effects" (Johnstone 52) and focus more on the final outcome, aiming to maximize positive effects. Such a view is sound within engineering, but it overlooks, or fails to grasp, the complicated ethical dilemma that arises when a fix helps only a certain subgroup of people. It could also be that, lacking a full understanding of the problem's intricacies, they offer a solution that does not address the real problem. In short, the designers of these technological fixes need to consider certain aspects far more carefully than they do: widening their scope of understanding, planning for more longevity in their solutions, accounting for who could be negatively or neutrally affected, and ensuring their solutions fit the values and habits of the society, even as those change. Huesemann and Huesemann take a far more dire stance. In their case, it is not merely that these 'technological solution creators' need to consider far more variables in their designs, but that it is impossible to design something which can truly fix a problem.
They note that trying to manipulate the course of natural selection can lead to far greater consequences later on, that even technology which appears positive is really a zero-sum game because it will have negative effects that are not immediately apparent, and that the environment's ability to tolerate all of this manipulation is finite. They believe Douthwaite's view is naive because humans do not actually understand the world as well as they like to think, and they sum up their piece with the statement that "the negative… consequences brought about by the application of science and technology are not only inherently unavoidable but also intrinsically unpredictable" (Huesemann and Huesemann 15).
As both Huesemann and Huesemann and Johnstone discuss, a major reason why using technology to fix social and environmental systems can have severe side effects is that it is, in practical terms, impossible to account for every variable, especially in systems as complex and prone to evolving as societies and ecosystems. Finding even a short-term solution is difficult enough, but because these variables can change, in part because of the 'fix' itself, finding a solution that never turns net-negative is simply not practical. Even so, these solutions can still have value, especially in circumstances where the fallout of forgoing any possible fix (say, against a natural or unnatural disaster) could lead to an immediate and significant loss of life. One could argue that we have gotten ourselves into this mess of fixing everything with technology because of our previous reliance upon it; yet despite the unintended consequences that continued reliance could bring, it seems doubtful that anyone would willingly let harm come to their environment or social group if they knew there was a way to stop it, even if greater consequences might follow later on.